May 16 16:09:30.791217 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 16 16:09:30.791237 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri May 16 14:51:29 -00 2025
May 16 16:09:30.791247 kernel: KASLR enabled
May 16 16:09:30.791252 kernel: efi: EFI v2.7 by EDK II
May 16 16:09:30.791258 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 16 16:09:30.791263 kernel: random: crng init done
May 16 16:09:30.791270 kernel: secureboot: Secure boot disabled
May 16 16:09:30.791275 kernel: ACPI: Early table checksum verification disabled
May 16 16:09:30.791288 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 16 16:09:30.791299 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 16 16:09:30.791306 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:30.791312 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:30.791318 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:30.791324 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:30.791331 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:30.791338 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:30.791344 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:30.791351 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:30.791357 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 16:09:30.791363 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 16 16:09:30.791369 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 16 16:09:30.791375 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 16 16:09:30.791381 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
May 16 16:09:30.791386 kernel: Zone ranges:
May 16 16:09:30.791392 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 16 16:09:30.791399 kernel: DMA32 empty
May 16 16:09:30.791405 kernel: Normal empty
May 16 16:09:30.791411 kernel: Device empty
May 16 16:09:30.791417 kernel: Movable zone start for each node
May 16 16:09:30.791422 kernel: Early memory node ranges
May 16 16:09:30.791428 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 16 16:09:30.791434 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 16 16:09:30.791440 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 16 16:09:30.791446 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 16 16:09:30.791452 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 16 16:09:30.791457 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 16 16:09:30.791463 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 16 16:09:30.791470 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 16 16:09:30.791476 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 16 16:09:30.791482 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 16 16:09:30.791491 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 16 16:09:30.791497 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 16 16:09:30.791504 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 16 16:09:30.791511 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 16 16:09:30.791518 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 16 16:09:30.791524 kernel: psci: probing for conduit method from ACPI.
May 16 16:09:30.791530 kernel: psci: PSCIv1.1 detected in firmware.
May 16 16:09:30.791536 kernel: psci: Using standard PSCI v0.2 function IDs
May 16 16:09:30.791543 kernel: psci: Trusted OS migration not required
May 16 16:09:30.791549 kernel: psci: SMC Calling Convention v1.1
May 16 16:09:30.791555 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 16 16:09:30.791562 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 16 16:09:30.791568 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 16 16:09:30.791576 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 16 16:09:30.791582 kernel: Detected PIPT I-cache on CPU0
May 16 16:09:30.791588 kernel: CPU features: detected: GIC system register CPU interface
May 16 16:09:30.791594 kernel: CPU features: detected: Spectre-v4
May 16 16:09:30.791600 kernel: CPU features: detected: Spectre-BHB
May 16 16:09:30.791607 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 16 16:09:30.791613 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 16 16:09:30.791619 kernel: CPU features: detected: ARM erratum 1418040
May 16 16:09:30.791626 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 16 16:09:30.791632 kernel: alternatives: applying boot alternatives
May 16 16:09:30.791639 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a0bb4243d79ba36a710f39399156a0a3ffb1b3c5e7037b80b74649cdc67b3731
May 16 16:09:30.791647 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 16:09:30.791654 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 16:09:30.791660 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 16:09:30.791666 kernel: Fallback order for Node 0: 0
May 16 16:09:30.791673 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 16 16:09:30.791679 kernel: Policy zone: DMA
May 16 16:09:30.791685 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 16:09:30.791691 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 16 16:09:30.791698 kernel: software IO TLB: area num 4.
May 16 16:09:30.791704 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 16 16:09:30.791710 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 16 16:09:30.791716 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 16:09:30.791724 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 16:09:30.791731 kernel: rcu: RCU event tracing is enabled.
May 16 16:09:30.791737 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 16:09:30.791744 kernel: Trampoline variant of Tasks RCU enabled.
May 16 16:09:30.791750 kernel: Tracing variant of Tasks RCU enabled.
May 16 16:09:30.791757 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 16:09:30.791763 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 16:09:30.791769 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 16:09:30.791776 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 16:09:30.791782 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 16 16:09:30.791788 kernel: GICv3: 256 SPIs implemented
May 16 16:09:30.791796 kernel: GICv3: 0 Extended SPIs implemented
May 16 16:09:30.791802 kernel: Root IRQ handler: gic_handle_irq
May 16 16:09:30.791808 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 16 16:09:30.791815 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 16 16:09:30.791821 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 16 16:09:30.791827 kernel: ITS [mem 0x08080000-0x0809ffff]
May 16 16:09:30.791834 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 16 16:09:30.791840 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 16 16:09:30.791854 kernel: GICv3: using LPI property table @0x0000000040100000
May 16 16:09:30.791861 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 16 16:09:30.791867 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 16:09:30.791874 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 16:09:30.791882 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 16 16:09:30.791889 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 16 16:09:30.791896 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 16 16:09:30.791902 kernel: arm-pv: using stolen time PV
May 16 16:09:30.791909 kernel: Console: colour dummy device 80x25
May 16 16:09:30.791915 kernel: ACPI: Core revision 20240827
May 16 16:09:30.791922 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 16 16:09:30.791928 kernel: pid_max: default: 32768 minimum: 301
May 16 16:09:30.791935 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 16 16:09:30.791943 kernel: landlock: Up and running.
May 16 16:09:30.791949 kernel: SELinux: Initializing.
May 16 16:09:30.791956 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 16:09:30.791962 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 16:09:30.791969 kernel: rcu: Hierarchical SRCU implementation.
May 16 16:09:30.791975 kernel: rcu: Max phase no-delay instances is 400.
May 16 16:09:30.791982 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 16 16:09:30.791989 kernel: Remapping and enabling EFI services.
May 16 16:09:30.791995 kernel: smp: Bringing up secondary CPUs ...
May 16 16:09:30.792002 kernel: Detected PIPT I-cache on CPU1
May 16 16:09:30.792014 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 16 16:09:30.792021 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 16 16:09:30.792030 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 16:09:30.792036 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 16 16:09:30.792043 kernel: Detected PIPT I-cache on CPU2
May 16 16:09:30.792050 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 16 16:09:30.792057 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 16 16:09:30.792065 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 16:09:30.792072 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 16 16:09:30.792079 kernel: Detected PIPT I-cache on CPU3
May 16 16:09:30.792086 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 16 16:09:30.792092 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 16 16:09:30.792099 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 16:09:30.792106 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 16 16:09:30.792113 kernel: smp: Brought up 1 node, 4 CPUs
May 16 16:09:30.792119 kernel: SMP: Total of 4 processors activated.
May 16 16:09:30.792126 kernel: CPU: All CPU(s) started at EL1
May 16 16:09:30.792134 kernel: CPU features: detected: 32-bit EL0 Support
May 16 16:09:30.792141 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 16 16:09:30.792148 kernel: CPU features: detected: Common not Private translations
May 16 16:09:30.792155 kernel: CPU features: detected: CRC32 instructions
May 16 16:09:30.792162 kernel: CPU features: detected: Enhanced Virtualization Traps
May 16 16:09:30.792169 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 16 16:09:30.792182 kernel: CPU features: detected: LSE atomic instructions
May 16 16:09:30.792189 kernel: CPU features: detected: Privileged Access Never
May 16 16:09:30.792196 kernel: CPU features: detected: RAS Extension Support
May 16 16:09:30.792205 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 16 16:09:30.792212 kernel: alternatives: applying system-wide alternatives
May 16 16:09:30.792219 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 16 16:09:30.792227 kernel: Memory: 2440984K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 125536K reserved, 0K cma-reserved)
May 16 16:09:30.792233 kernel: devtmpfs: initialized
May 16 16:09:30.792240 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 16:09:30.792247 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 16:09:30.792254 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 16 16:09:30.792261 kernel: 0 pages in range for non-PLT usage
May 16 16:09:30.792269 kernel: 508544 pages in range for PLT usage
May 16 16:09:30.792276 kernel: pinctrl core: initialized pinctrl subsystem
May 16 16:09:30.792282 kernel: SMBIOS 3.0.0 present.
May 16 16:09:30.792289 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 16 16:09:30.792296 kernel: DMI: Memory slots populated: 1/1
May 16 16:09:30.792303 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 16:09:30.792310 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 16 16:09:30.792317 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 16 16:09:30.792324 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 16 16:09:30.792332 kernel: audit: initializing netlink subsys (disabled)
May 16 16:09:30.792339 kernel: audit: type=2000 audit(0.028:1): state=initialized audit_enabled=0 res=1
May 16 16:09:30.792346 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 16:09:30.792353 kernel: cpuidle: using governor menu
May 16 16:09:30.792360 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 16 16:09:30.792366 kernel: ASID allocator initialised with 32768 entries
May 16 16:09:30.792373 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 16:09:30.792380 kernel: Serial: AMBA PL011 UART driver
May 16 16:09:30.792387 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 16:09:30.792395 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 16 16:09:30.792401 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 16 16:09:30.792408 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 16 16:09:30.792415 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 16:09:30.792422 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 16 16:09:30.792428 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 16 16:09:30.792435 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 16 16:09:30.792443 kernel: ACPI: Added _OSI(Module Device)
May 16 16:09:30.792449 kernel: ACPI: Added _OSI(Processor Device)
May 16 16:09:30.792458 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 16:09:30.792464 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 16:09:30.792471 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 16:09:30.792478 kernel: ACPI: Interpreter enabled
May 16 16:09:30.792485 kernel: ACPI: Using GIC for interrupt routing
May 16 16:09:30.792492 kernel: ACPI: MCFG table detected, 1 entries
May 16 16:09:30.792498 kernel: ACPI: CPU0 has been hot-added
May 16 16:09:30.792505 kernel: ACPI: CPU1 has been hot-added
May 16 16:09:30.792512 kernel: ACPI: CPU2 has been hot-added
May 16 16:09:30.792520 kernel: ACPI: CPU3 has been hot-added
May 16 16:09:30.792527 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 16 16:09:30.792534 kernel: printk: legacy console [ttyAMA0] enabled
May 16 16:09:30.792541 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 16:09:30.792669 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 16:09:30.792733 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 16 16:09:30.792790 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 16 16:09:30.792856 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 16 16:09:30.792921 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 16 16:09:30.792930 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 16 16:09:30.792937 kernel: PCI host bridge to bus 0000:00
May 16 16:09:30.793019 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 16 16:09:30.793076 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 16 16:09:30.793128 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 16 16:09:30.793192 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 16:09:30.793271 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 16 16:09:30.793341 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 16 16:09:30.793401 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 16 16:09:30.793460 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 16 16:09:30.793518 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 16:09:30.793577 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 16 16:09:30.793635 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 16 16:09:30.793696 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 16 16:09:30.793750 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 16 16:09:30.793801 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 16 16:09:30.793861 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 16 16:09:30.793871 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 16 16:09:30.793878 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 16 16:09:30.793885 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 16 16:09:30.793894 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 16 16:09:30.793901 kernel: iommu: Default domain type: Translated
May 16 16:09:30.793908 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 16 16:09:30.793915 kernel: efivars: Registered efivars operations
May 16 16:09:30.793921 kernel: vgaarb: loaded
May 16 16:09:30.793928 kernel: clocksource: Switched to clocksource arch_sys_counter
May 16 16:09:30.793935 kernel: VFS: Disk quotas dquot_6.6.0
May 16 16:09:30.793942 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 16:09:30.793948 kernel: pnp: PnP ACPI init
May 16 16:09:30.794020 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 16 16:09:30.794030 kernel: pnp: PnP ACPI: found 1 devices
May 16 16:09:30.794037 kernel: NET: Registered PF_INET protocol family
May 16 16:09:30.794044 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 16:09:30.794051 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 16:09:30.794057 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 16:09:30.794065 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 16:09:30.794071 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 16:09:30.794080 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 16:09:30.794087 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 16:09:30.794094 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 16:09:30.794101 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 16:09:30.794107 kernel: PCI: CLS 0 bytes, default 64
May 16 16:09:30.794114 kernel: kvm [1]: HYP mode not available
May 16 16:09:30.794121 kernel: Initialise system trusted keyrings
May 16 16:09:30.794129 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 16:09:30.794135 kernel: Key type asymmetric registered
May 16 16:09:30.794143 kernel: Asymmetric key parser 'x509' registered
May 16 16:09:30.794150 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 16 16:09:30.794157 kernel: io scheduler mq-deadline registered
May 16 16:09:30.794164 kernel: io scheduler kyber registered
May 16 16:09:30.794171 kernel: io scheduler bfq registered
May 16 16:09:30.794189 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 16 16:09:30.794199 kernel: ACPI: button: Power Button [PWRB]
May 16 16:09:30.794206 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 16 16:09:30.794278 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 16 16:09:30.794290 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 16:09:30.794298 kernel: thunder_xcv, ver 1.0
May 16 16:09:30.794305 kernel: thunder_bgx, ver 1.0
May 16 16:09:30.794312 kernel: nicpf, ver 1.0
May 16 16:09:30.794319 kernel: nicvf, ver 1.0
May 16 16:09:30.794388 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 16 16:09:30.794443 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T16:09:30 UTC (1747411770)
May 16 16:09:30.794452 kernel: hid: raw HID events driver (C) Jiri Kosina
May 16 16:09:30.794462 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 16 16:09:30.794468 kernel: watchdog: NMI not fully supported
May 16 16:09:30.794475 kernel: watchdog: Hard watchdog permanently disabled
May 16 16:09:30.794482 kernel: NET: Registered PF_INET6 protocol family
May 16 16:09:30.794489 kernel: Segment Routing with IPv6
May 16 16:09:30.794495 kernel: In-situ OAM (IOAM) with IPv6
May 16 16:09:30.794502 kernel: NET: Registered PF_PACKET protocol family
May 16 16:09:30.794509 kernel: Key type dns_resolver registered
May 16 16:09:30.794516 kernel: registered taskstats version 1
May 16 16:09:30.794524 kernel: Loading compiled-in X.509 certificates
May 16 16:09:30.794531 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 27b8347ec414bf9dcd45b3eefdd645a09d039333'
May 16 16:09:30.794538 kernel: Demotion targets for Node 0: null
May 16 16:09:30.794544 kernel: Key type .fscrypt registered
May 16 16:09:30.794551 kernel: Key type fscrypt-provisioning registered
May 16 16:09:30.794558 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 16:09:30.794565 kernel: ima: Allocated hash algorithm: sha1
May 16 16:09:30.794572 kernel: ima: No architecture policies found
May 16 16:09:30.794579 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 16 16:09:30.794586 kernel: clk: Disabling unused clocks
May 16 16:09:30.794593 kernel: PM: genpd: Disabling unused power domains
May 16 16:09:30.794600 kernel: Warning: unable to open an initial console.
May 16 16:09:30.794607 kernel: Freeing unused kernel memory: 39424K
May 16 16:09:30.794614 kernel: Run /init as init process
May 16 16:09:30.794621 kernel: with arguments:
May 16 16:09:30.794628 kernel: /init
May 16 16:09:30.794634 kernel: with environment:
May 16 16:09:30.794641 kernel: HOME=/
May 16 16:09:30.794649 kernel: TERM=linux
May 16 16:09:30.794656 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 16:09:30.794663 systemd[1]: Successfully made /usr/ read-only.
May 16 16:09:30.794673 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 16:09:30.794681 systemd[1]: Detected virtualization kvm.
May 16 16:09:30.794689 systemd[1]: Detected architecture arm64.
May 16 16:09:30.794696 systemd[1]: Running in initrd.
May 16 16:09:30.794703 systemd[1]: No hostname configured, using default hostname.
May 16 16:09:30.794712 systemd[1]: Hostname set to .
May 16 16:09:30.794719 systemd[1]: Initializing machine ID from VM UUID.
May 16 16:09:30.794726 systemd[1]: Queued start job for default target initrd.target.
May 16 16:09:30.794734 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 16:09:30.794741 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 16:09:30.794749 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 16 16:09:30.794757 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 16:09:30.794764 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 16 16:09:30.794773 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 16 16:09:30.794782 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 16 16:09:30.794789 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 16 16:09:30.794797 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 16:09:30.794804 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 16:09:30.794811 systemd[1]: Reached target paths.target - Path Units.
May 16 16:09:30.794820 systemd[1]: Reached target slices.target - Slice Units.
May 16 16:09:30.794827 systemd[1]: Reached target swap.target - Swaps.
May 16 16:09:30.794834 systemd[1]: Reached target timers.target - Timer Units.
May 16 16:09:30.794848 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 16 16:09:30.794857 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 16:09:30.794864 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 16 16:09:30.794872 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 16 16:09:30.794879 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 16:09:30.794887 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 16:09:30.794896 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 16:09:30.794904 systemd[1]: Reached target sockets.target - Socket Units.
May 16 16:09:30.794911 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 16 16:09:30.794919 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 16:09:30.794926 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 16 16:09:30.794934 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 16 16:09:30.794941 systemd[1]: Starting systemd-fsck-usr.service...
May 16 16:09:30.794948 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 16:09:30.794957 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 16:09:30.794964 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:09:30.794971 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 16 16:09:30.794979 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 16:09:30.794987 systemd[1]: Finished systemd-fsck-usr.service.
May 16 16:09:30.795013 systemd-journald[245]: Collecting audit messages is disabled.
May 16 16:09:30.795033 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 16:09:30.795041 systemd-journald[245]: Journal started
May 16 16:09:30.795060 systemd-journald[245]: Runtime Journal (/run/log/journal/d2a5adba28f94ec5bc81fb36b4c7f2a4) is 6M, max 48.5M, 42.4M free.
May 16 16:09:30.788345 systemd-modules-load[246]: Inserted module 'overlay'
May 16 16:09:30.805780 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:09:30.805799 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 16:09:30.805809 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 16:09:30.806196 kernel: Bridge firewalling registered
May 16 16:09:30.806371 systemd-modules-load[246]: Inserted module 'br_netfilter'
May 16 16:09:30.807705 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 16:09:30.811250 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 16:09:30.813492 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 16:09:30.815262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 16:09:30.817729 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 16:09:30.825783 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 16:09:30.835213 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 16:09:30.835228 systemd-tmpfiles[271]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 16 16:09:30.837249 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 16:09:30.839259 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 16:09:30.842282 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 16:09:30.845321 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 16:09:30.849725 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 16 16:09:30.864468 dracut-cmdline[291]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a0bb4243d79ba36a710f39399156a0a3ffb1b3c5e7037b80b74649cdc67b3731
May 16 16:09:30.880121 systemd-resolved[287]: Positive Trust Anchors:
May 16 16:09:30.880141 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 16:09:30.880210 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 16:09:30.885101 systemd-resolved[287]: Defaulting to hostname 'linux'.
May 16 16:09:30.886269 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 16:09:30.889692 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 16:09:30.943207 kernel: SCSI subsystem initialized
May 16 16:09:30.948199 kernel: Loading iSCSI transport class v2.0-870.
May 16 16:09:30.955217 kernel: iscsi: registered transport (tcp)
May 16 16:09:30.967415 kernel: iscsi: registered transport (qla4xxx)
May 16 16:09:30.967434 kernel: QLogic iSCSI HBA Driver
May 16 16:09:30.986372 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 16:09:31.003346 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 16:09:31.005906 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 16:09:31.047978 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 16 16:09:31.050249 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 16 16:09:31.112203 kernel: raid6: neonx8 gen() 15773 MB/s
May 16 16:09:31.129186 kernel: raid6: neonx4 gen() 15808 MB/s
May 16 16:09:31.146190 kernel: raid6: neonx2 gen() 13187 MB/s
May 16 16:09:31.163198 kernel: raid6: neonx1 gen() 10467 MB/s
May 16 16:09:31.180197 kernel: raid6: int64x8 gen() 6905 MB/s
May 16 16:09:31.197198 kernel: raid6: int64x4 gen() 7354 MB/s
May 16 16:09:31.214190 kernel: raid6: int64x2 gen() 6106 MB/s
May 16 16:09:31.231189 kernel: raid6: int64x1 gen() 5059 MB/s
May 16 16:09:31.231203 kernel: raid6: using algorithm neonx4 gen() 15808 MB/s
May 16 16:09:31.248196 kernel: raid6: .... xor() 12349 MB/s, rmw enabled
May 16 16:09:31.248211 kernel: raid6: using neon recovery algorithm
May 16 16:09:31.254191 kernel: xor: measuring software checksum speed
May 16 16:09:31.255218 kernel: 8regs : 19947 MB/sec
May 16 16:09:31.255231 kernel: 32regs : 21260 MB/sec
May 16 16:09:31.256195 kernel: arm64_neon : 27889 MB/sec
May 16 16:09:31.256209 kernel: xor: using function: arm64_neon (27889 MB/sec)
May 16 16:09:31.309208 kernel: Btrfs loaded, zoned=no, fsverity=no
May 16 16:09:31.315519 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 16 16:09:31.318003 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 16 16:09:31.350698 systemd-udevd[498]: Using default interface naming scheme 'v255'.
May 16 16:09:31.354937 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 16 16:09:31.357382 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 16 16:09:31.388466 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation
May 16 16:09:31.410234 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 16:09:31.412676 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 16 16:09:31.464344 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 16:09:31.467349 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 16 16:09:31.519297 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 16 16:09:31.525369 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 16 16:09:31.525558 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 16 16:09:31.525569 kernel: GPT:9289727 != 19775487
May 16 16:09:31.525577 kernel: GPT:Alternate GPT header not at the end of the disk.
May 16 16:09:31.525586 kernel: GPT:9289727 != 19775487
May 16 16:09:31.525594 kernel: GPT: Use GNU Parted to correct GPT errors.
May 16 16:09:31.525602 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 16:09:31.527860 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 16:09:31.527981 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:09:31.531146 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:09:31.533374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 16:09:31.557933 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 16 16:09:31.564232 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 16 16:09:31.565465 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 16:09:31.574374 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 16 16:09:31.581618 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 16 16:09:31.587529 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 16 16:09:31.588463 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 16 16:09:31.591068 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 16:09:31.593290 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 16:09:31.594997 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 16 16:09:31.597581 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 16 16:09:31.599361 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 16 16:09:31.612555 disk-uuid[593]: Primary Header is updated.
May 16 16:09:31.612555 disk-uuid[593]: Secondary Entries is updated.
May 16 16:09:31.612555 disk-uuid[593]: Secondary Header is updated.
May 16 16:09:31.616201 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 16:09:31.617249 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 16 16:09:32.629206 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 16:09:32.629912 disk-uuid[599]: The operation has completed successfully.
May 16 16:09:32.654764 systemd[1]: disk-uuid.service: Deactivated successfully.
May 16 16:09:32.654883 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 16 16:09:32.679938 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 16 16:09:32.705064 sh[613]: Success
May 16 16:09:32.718508 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 16 16:09:32.718562 kernel: device-mapper: uevent: version 1.0.3
May 16 16:09:32.719325 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 16 16:09:32.728219 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 16 16:09:32.756961 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 16 16:09:32.759595 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 16 16:09:32.771325 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 16 16:09:32.775198 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 16 16:09:32.775225 kernel: BTRFS: device fsid 87f734d5-e9e0-4da0-9e65-ee17bdaa6a26 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (625)
May 16 16:09:32.777649 kernel: BTRFS info (device dm-0): first mount of filesystem 87f734d5-e9e0-4da0-9e65-ee17bdaa6a26
May 16 16:09:32.777669 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 16 16:09:32.778335 kernel: BTRFS info (device dm-0): using free-space-tree
May 16 16:09:32.782085 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 16 16:09:32.783442 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 16 16:09:32.784777 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 16 16:09:32.785618 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 16 16:09:32.787123 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 16 16:09:32.811238 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (656)
May 16 16:09:32.812928 kernel: BTRFS info (device vda6): first mount of filesystem 2ff0c403-fbe0-45df-941b-f7dd331fa2eb
May 16 16:09:32.812959 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 16 16:09:32.812969 kernel: BTRFS info (device vda6): using free-space-tree
May 16 16:09:32.819192 kernel: BTRFS info (device vda6): last unmount of filesystem 2ff0c403-fbe0-45df-941b-f7dd331fa2eb
May 16 16:09:32.820032 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 16 16:09:32.822019 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 16 16:09:32.894056 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 16:09:32.898316 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 16 16:09:32.940447 systemd-networkd[799]: lo: Link UP
May 16 16:09:32.940459 systemd-networkd[799]: lo: Gained carrier
May 16 16:09:32.941137 systemd-networkd[799]: Enumeration completed
May 16 16:09:32.941380 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 16 16:09:32.941880 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:09:32.941883 systemd-networkd[799]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 16:09:32.942659 systemd-networkd[799]: eth0: Link UP
May 16 16:09:32.942662 systemd-networkd[799]: eth0: Gained carrier
May 16 16:09:32.942670 systemd-networkd[799]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 16 16:09:32.943221 systemd[1]: Reached target network.target - Network.
May 16 16:09:32.960218 systemd-networkd[799]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 16:09:32.965653 ignition[701]: Ignition 2.21.0
May 16 16:09:32.965664 ignition[701]: Stage: fetch-offline
May 16 16:09:32.965696 ignition[701]: no configs at "/usr/lib/ignition/base.d"
May 16 16:09:32.965702 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:09:32.965891 ignition[701]: parsed url from cmdline: ""
May 16 16:09:32.965894 ignition[701]: no config URL provided
May 16 16:09:32.965900 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
May 16 16:09:32.965906 ignition[701]: no config at "/usr/lib/ignition/user.ign"
May 16 16:09:32.965925 ignition[701]: op(1): [started] loading QEMU firmware config module
May 16 16:09:32.965929 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 16 16:09:32.977621 ignition[701]: op(1): [finished] loading QEMU firmware config module
May 16 16:09:33.015831 ignition[701]: parsing config with SHA512: d6735d144fc68a5efe7d63ee1ade7aa87781e4f21ccb651b06617ef5e34d1255f14cea3d4e57f5219f34fb2c9601d4007c39d9188374c10078aab6035319c48b
May 16 16:09:33.021900 unknown[701]: fetched base config from "system"
May 16 16:09:33.021926 unknown[701]: fetched user config from "qemu"
May 16 16:09:33.022363 ignition[701]: fetch-offline: fetch-offline passed
May 16 16:09:33.024363 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 16:09:33.022423 ignition[701]: Ignition finished successfully
May 16 16:09:33.025656 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 16 16:09:33.026428 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 16 16:09:33.050213 ignition[812]: Ignition 2.21.0
May 16 16:09:33.050228 ignition[812]: Stage: kargs
May 16 16:09:33.050355 ignition[812]: no configs at "/usr/lib/ignition/base.d"
May 16 16:09:33.050363 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:09:33.053682 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 16 16:09:33.051530 ignition[812]: kargs: kargs passed
May 16 16:09:33.051580 ignition[812]: Ignition finished successfully
May 16 16:09:33.056330 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 16 16:09:33.087373 ignition[820]: Ignition 2.21.0
May 16 16:09:33.087389 ignition[820]: Stage: disks
May 16 16:09:33.087526 ignition[820]: no configs at "/usr/lib/ignition/base.d"
May 16 16:09:33.087535 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:09:33.088258 ignition[820]: disks: disks passed
May 16 16:09:33.090735 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 16 16:09:33.088303 ignition[820]: Ignition finished successfully
May 16 16:09:33.092371 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 16 16:09:33.094015 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 16 16:09:33.095718 systemd[1]: Reached target local-fs.target - Local File Systems.
May 16 16:09:33.097561 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 16:09:33.099465 systemd[1]: Reached target basic.target - Basic System.
May 16 16:09:33.101945 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 16 16:09:33.125733 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 16 16:09:33.131273 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 16 16:09:33.133321 systemd[1]: Mounting sysroot.mount - /sysroot...
May 16 16:09:33.207194 kernel: EXT4-fs (vda9): mounted filesystem 0ada590e-bc2d-44be-b1f0-1b069cf0a0c5 r/w with ordered data mode. Quota mode: none.
May 16 16:09:33.207684 systemd[1]: Mounted sysroot.mount - /sysroot.
May 16 16:09:33.208699 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 16 16:09:33.211684 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 16:09:33.213265 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 16 16:09:33.214212 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 16 16:09:33.214252 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 16 16:09:33.214273 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 16:09:33.227313 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 16 16:09:33.229536 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 16 16:09:33.233603 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (839)
May 16 16:09:33.233637 kernel: BTRFS info (device vda6): first mount of filesystem 2ff0c403-fbe0-45df-941b-f7dd331fa2eb
May 16 16:09:33.233647 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 16 16:09:33.235191 kernel: BTRFS info (device vda6): using free-space-tree
May 16 16:09:33.237321 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 16:09:33.281305 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
May 16 16:09:33.285104 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
May 16 16:09:33.288524 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory
May 16 16:09:33.292188 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory
May 16 16:09:33.354530 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 16 16:09:33.356465 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 16 16:09:33.358007 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 16 16:09:33.375192 kernel: BTRFS info (device vda6): last unmount of filesystem 2ff0c403-fbe0-45df-941b-f7dd331fa2eb
May 16 16:09:33.387275 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 16 16:09:33.398731 ignition[953]: INFO : Ignition 2.21.0
May 16 16:09:33.398731 ignition[953]: INFO : Stage: mount
May 16 16:09:33.400782 ignition[953]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 16:09:33.400782 ignition[953]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:09:33.402776 ignition[953]: INFO : mount: mount passed
May 16 16:09:33.402776 ignition[953]: INFO : Ignition finished successfully
May 16 16:09:33.402689 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 16 16:09:33.404454 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 16 16:09:33.782844 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 16 16:09:33.784352 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 16 16:09:33.815185 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (966)
May 16 16:09:33.816789 kernel: BTRFS info (device vda6): first mount of filesystem 2ff0c403-fbe0-45df-941b-f7dd331fa2eb
May 16 16:09:33.816809 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 16 16:09:33.817418 kernel: BTRFS info (device vda6): using free-space-tree
May 16 16:09:33.819696 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 16 16:09:33.845229 ignition[983]: INFO : Ignition 2.21.0
May 16 16:09:33.845229 ignition[983]: INFO : Stage: files
May 16 16:09:33.847435 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 16:09:33.847435 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:09:33.847435 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
May 16 16:09:33.850641 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 16 16:09:33.850641 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 16 16:09:33.853514 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 16 16:09:33.853514 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 16 16:09:33.853514 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 16 16:09:33.853514 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 16 16:09:33.852209 unknown[983]: wrote ssh authorized keys file for user: core
May 16 16:09:33.860009 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
May 16 16:09:34.840318 systemd-networkd[799]: eth0: Gained IPv6LL
May 16 16:09:34.986801 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 16 16:09:38.800920 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
May 16 16:09:38.803022 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 16 16:09:38.803022 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 16 16:09:38.803022 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
May 16 16:09:38.803022 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 16 16:09:38.803022 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 16:09:38.803022 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 16 16:09:38.803022 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 16:09:38.803022 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 16 16:09:38.816285 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 16 16:09:38.816285 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 16:09:38.816285 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 16 16:09:38.816285 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 16 16:09:38.816285 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 16 16:09:38.816285 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
May 16 16:09:39.802424 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
May 16 16:09:40.497177 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 16 16:09:40.499577 ignition[983]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
May 16 16:09:40.499577 ignition[983]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 16:09:40.502779 ignition[983]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 16 16:09:40.502779 ignition[983]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
May 16 16:09:40.502779 ignition[983]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
May 16 16:09:40.502779 ignition[983]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 16:09:40.502779 ignition[983]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 16:09:40.502779 ignition[983]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
May 16 16:09:40.502779 ignition[983]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
May 16 16:09:40.517665 ignition[983]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 16 16:09:40.521337 ignition[983]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 16 16:09:40.524147 ignition[983]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
May 16 16:09:40.524147 ignition[983]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
May 16 16:09:40.524147 ignition[983]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
May 16 16:09:40.524147 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
May 16 16:09:40.524147 ignition[983]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 16 16:09:40.524147 ignition[983]: INFO : files: files passed
May 16 16:09:40.524147 ignition[983]: INFO : Ignition finished successfully
May 16 16:09:40.525246 systemd[1]: Finished ignition-files.service - Ignition (files).
May 16 16:09:40.527639 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 16 16:09:40.529556 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 16 16:09:40.546022 systemd[1]: ignition-quench.service: Deactivated successfully.
May 16 16:09:40.546120 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 16 16:09:40.549075 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory
May 16 16:09:40.550377 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 16:09:40.550377 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 16 16:09:40.554298 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 16:09:40.551657 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 16:09:40.553111 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 16 16:09:40.555999 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 16 16:09:40.585473 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 16:09:40.585594 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 16 16:09:40.587774 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 16 16:09:40.589643 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 16 16:09:40.591416 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 16 16:09:40.592166 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 16 16:09:40.626209 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 16:09:40.628608 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 16 16:09:40.650919 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 16 16:09:40.652158 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 16 16:09:40.654204 systemd[1]: Stopped target timers.target - Timer Units.
May 16 16:09:40.655976 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 16 16:09:40.656097 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 16 16:09:40.658582 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 16 16:09:40.660523 systemd[1]: Stopped target basic.target - Basic System.
May 16 16:09:40.662100 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 16 16:09:40.663793 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 16 16:09:40.665696 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 16 16:09:40.667600 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 16 16:09:40.669456 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 16 16:09:40.671234 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 16 16:09:40.673185 systemd[1]: Stopped target sysinit.target - System Initialization.
May 16 16:09:40.675145 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 16 16:09:40.676884 systemd[1]: Stopped target swap.target - Swaps.
May 16 16:09:40.678337 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 16 16:09:40.678469 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 16 16:09:40.680760 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 16 16:09:40.682671 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 16:09:40.684551 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 16 16:09:40.685257 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 16:09:40.686348 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 16 16:09:40.686474 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 16 16:09:40.689249 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 16 16:09:40.689375 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 16 16:09:40.691274 systemd[1]: Stopped target paths.target - Path Units.
May 16 16:09:40.692901 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 16 16:09:40.697226 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 16:09:40.698480 systemd[1]: Stopped target slices.target - Slice Units.
May 16 16:09:40.700508 systemd[1]: Stopped target sockets.target - Socket Units.
May 16 16:09:40.702061 systemd[1]: iscsid.socket: Deactivated successfully.
May 16 16:09:40.702144 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 16 16:09:40.703721 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 16 16:09:40.703809 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 16:09:40.705309 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 16 16:09:40.705425 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 16 16:09:40.707197 systemd[1]: ignition-files.service: Deactivated successfully.
May 16 16:09:40.707298 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 16 16:09:40.709582 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 16 16:09:40.711986 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 16 16:09:40.713149 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 16 16:09:40.713279 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 16 16:09:40.715100 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 16 16:09:40.715215 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 16 16:09:40.720283 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 16 16:09:40.724352 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 16 16:09:40.732313 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 16:09:40.737434 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 16 16:09:40.737547 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 16 16:09:40.739919 ignition[1038]: INFO : Ignition 2.21.0
May 16 16:09:40.739919 ignition[1038]: INFO : Stage: umount
May 16 16:09:40.739919 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 16:09:40.739919 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 16:09:40.744512 ignition[1038]: INFO : umount: umount passed
May 16 16:09:40.744512 ignition[1038]: INFO : Ignition finished successfully
May 16 16:09:40.744103 systemd[1]: ignition-mount.service: Deactivated successfully.
May 16 16:09:40.744211 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 16 16:09:40.745504 systemd[1]: Stopped target network.target - Network.
May 16 16:09:40.746865 systemd[1]: ignition-disks.service: Deactivated successfully.
May 16 16:09:40.746923 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 16 16:09:40.748550 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 16 16:09:40.748595 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 16 16:09:40.750161 systemd[1]: ignition-setup.service: Deactivated successfully.
May 16 16:09:40.750228 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 16 16:09:40.751870 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 16 16:09:40.751918 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 16 16:09:40.753487 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 16 16:09:40.753536 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 16 16:09:40.755331 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 16 16:09:40.757073 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 16 16:09:40.766828 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 16 16:09:40.766924 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 16 16:09:40.770961 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 16 16:09:40.771238 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 16 16:09:40.771355 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 16 16:09:40.774615 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 16 16:09:40.775080 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 16 16:09:40.777141 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 16 16:09:40.777208 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 16 16:09:40.779872 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 16 16:09:40.780738 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 16 16:09:40.780794 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 16 16:09:40.782914 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 16:09:40.782959 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 16 16:09:40.785520 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 16 16:09:40.785561 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 16 16:09:40.787714 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 16 16:09:40.787758 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 16:09:40.790452 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 16:09:40.793061 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 16:09:40.793116 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 16 16:09:40.819793 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 16:09:40.819954 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 16:09:40.822221 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 16:09:40.822304 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 16 16:09:40.823775 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 16:09:40.823849 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 16 16:09:40.825333 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 16:09:40.825366 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 16 16:09:40.827034 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 16:09:40.827088 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 16 16:09:40.829726 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 16:09:40.829774 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 16 16:09:40.832469 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 16:09:40.832530 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 16:09:40.836025 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 16 16:09:40.837223 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
May 16 16:09:40.837287 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 16 16:09:40.840108 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 16:09:40.840157 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 16:09:40.843464 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 16 16:09:40.843508 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 16:09:40.846580 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 16:09:40.846622 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 16:09:40.849079 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 16:09:40.849125 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 16:09:40.853082 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. May 16 16:09:40.853136 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. May 16 16:09:40.853164 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 16 16:09:40.853271 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 16 16:09:40.858130 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 16:09:40.859240 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 16 16:09:40.861457 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 16 16:09:40.864246 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 16 16:09:40.885619 systemd[1]: Switching root. 
May 16 16:09:40.910206 systemd-journald[245]: Journal stopped May 16 16:09:41.681061 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). May 16 16:09:41.681113 kernel: SELinux: policy capability network_peer_controls=1 May 16 16:09:41.681125 kernel: SELinux: policy capability open_perms=1 May 16 16:09:41.681139 kernel: SELinux: policy capability extended_socket_class=1 May 16 16:09:41.681150 kernel: SELinux: policy capability always_check_network=0 May 16 16:09:41.681161 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 16:09:41.681251 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 16:09:41.681267 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 16:09:41.681281 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 16:09:41.681290 kernel: SELinux: policy capability userspace_initial_context=0 May 16 16:09:41.681300 kernel: audit: type=1403 audit(1747411781.120:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 16:09:41.681310 systemd[1]: Successfully loaded SELinux policy in 50.802ms. May 16 16:09:41.681325 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.170ms. May 16 16:09:41.681340 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 16:09:41.681351 systemd[1]: Detected virtualization kvm. May 16 16:09:41.681363 systemd[1]: Detected architecture arm64. May 16 16:09:41.681373 systemd[1]: Detected first boot. May 16 16:09:41.681382 systemd[1]: Initializing machine ID from VM UUID. May 16 16:09:41.681392 zram_generator::config[1085]: No configuration found. 
May 16 16:09:41.681403 kernel: NET: Registered PF_VSOCK protocol family May 16 16:09:41.681413 systemd[1]: Populated /etc with preset unit settings. May 16 16:09:41.681424 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 16 16:09:41.681433 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 16:09:41.681445 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 16 16:09:41.681455 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 16:09:41.681464 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 16 16:09:41.681475 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 16 16:09:41.681484 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 16 16:09:41.681494 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 16 16:09:41.681504 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 16 16:09:41.681514 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 16 16:09:41.681525 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 16 16:09:41.681535 systemd[1]: Created slice user.slice - User and Session Slice. May 16 16:09:41.681545 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 16:09:41.681555 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 16:09:41.681565 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 16 16:09:41.681575 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 16 16:09:41.681585 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
May 16 16:09:41.681596 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 16:09:41.681606 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 16 16:09:41.681617 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 16:09:41.681628 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 16:09:41.681638 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 16 16:09:41.681647 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 16 16:09:41.681661 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 16 16:09:41.681671 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 16 16:09:41.681681 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 16:09:41.681691 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 16:09:41.681703 systemd[1]: Reached target slices.target - Slice Units. May 16 16:09:41.681713 systemd[1]: Reached target swap.target - Swaps. May 16 16:09:41.681723 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 16 16:09:41.681734 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 16 16:09:41.681744 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 16 16:09:41.681754 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 16:09:41.681765 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 16:09:41.681776 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 16:09:41.681786 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 16 16:09:41.681804 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
May 16 16:09:41.681816 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 16 16:09:41.681826 systemd[1]: Mounting media.mount - External Media Directory... May 16 16:09:41.681836 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 16 16:09:41.681845 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 16 16:09:41.681856 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 16 16:09:41.681866 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 16:09:41.681876 systemd[1]: Reached target machines.target - Containers. May 16 16:09:41.681886 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 16 16:09:41.681899 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 16:09:41.681909 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 16:09:41.681919 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 16 16:09:41.681929 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 16:09:41.681940 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 16:09:41.681950 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 16:09:41.681960 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 16 16:09:41.681970 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 16:09:41.681982 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 16:09:41.681992 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
May 16 16:09:41.682002 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 16 16:09:41.682012 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 16:09:41.682021 systemd[1]: Stopped systemd-fsck-usr.service. May 16 16:09:41.682032 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 16:09:41.682042 kernel: loop: module loaded May 16 16:09:41.682052 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 16:09:41.682062 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 16:09:41.682072 kernel: fuse: init (API version 7.41) May 16 16:09:41.682083 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 16:09:41.682093 kernel: ACPI: bus type drm_connector registered May 16 16:09:41.682102 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 16 16:09:41.682112 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 16 16:09:41.682122 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 16:09:41.682134 systemd[1]: verity-setup.service: Deactivated successfully. May 16 16:09:41.682144 systemd[1]: Stopped verity-setup.service. May 16 16:09:41.682156 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 16 16:09:41.682166 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 16 16:09:41.682187 systemd[1]: Mounted media.mount - External Media Directory. May 16 16:09:41.682200 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 16 16:09:41.682210 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. 
May 16 16:09:41.682220 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 16 16:09:41.682230 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 16 16:09:41.682264 systemd-journald[1156]: Collecting audit messages is disabled. May 16 16:09:41.682285 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 16:09:41.682296 systemd-journald[1156]: Journal started May 16 16:09:41.682317 systemd-journald[1156]: Runtime Journal (/run/log/journal/d2a5adba28f94ec5bc81fb36b4c7f2a4) is 6M, max 48.5M, 42.4M free. May 16 16:09:41.481819 systemd[1]: Queued start job for default target multi-user.target. May 16 16:09:41.491993 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 16 16:09:41.492371 systemd[1]: systemd-journald.service: Deactivated successfully. May 16 16:09:41.684213 systemd[1]: Started systemd-journald.service - Journal Service. May 16 16:09:41.684770 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 16:09:41.684961 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 16 16:09:41.686455 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 16:09:41.686608 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 16:09:41.687961 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 16:09:41.688102 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 16:09:41.691405 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 16:09:41.691562 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 16:09:41.692929 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 16:09:41.693073 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 16 16:09:41.694479 systemd[1]: modprobe@loop.service: Deactivated successfully. 
May 16 16:09:41.694634 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 16:09:41.696081 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 16 16:09:41.697463 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 16:09:41.700590 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 16 16:09:41.702217 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 16 16:09:41.708464 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 16:09:41.715772 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 16:09:41.718120 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 16 16:09:41.720169 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 16 16:09:41.721304 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 16:09:41.721341 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 16:09:41.723144 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 16 16:09:41.728058 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 16 16:09:41.729392 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 16:09:41.730624 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 16 16:09:41.732491 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 16 16:09:41.733703 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 16 16:09:41.735744 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 16 16:09:41.737152 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 16:09:41.739318 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 16:09:41.742346 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 16 16:09:41.742622 systemd-journald[1156]: Time spent on flushing to /var/log/journal/d2a5adba28f94ec5bc81fb36b4c7f2a4 is 14.341ms for 888 entries. May 16 16:09:41.742622 systemd-journald[1156]: System Journal (/var/log/journal/d2a5adba28f94ec5bc81fb36b4c7f2a4) is 8M, max 195.6M, 187.6M free. May 16 16:09:41.769623 systemd-journald[1156]: Received client request to flush runtime journal. May 16 16:09:41.745367 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 16:09:41.748338 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 16 16:09:41.749594 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 16 16:09:41.757480 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 16 16:09:41.762650 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 16 16:09:41.766104 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 16 16:09:41.770930 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 16 16:09:41.774226 kernel: loop0: detected capacity change from 0 to 138376 May 16 16:09:41.785883 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. May 16 16:09:41.785899 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. May 16 16:09:41.787438 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
May 16 16:09:41.792593 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 16 16:09:41.793197 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 16 16:09:41.796606 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 16:09:41.799656 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 16 16:09:41.823242 kernel: loop1: detected capacity change from 0 to 207008 May 16 16:09:41.837278 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 16 16:09:41.842320 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 16:09:41.853449 kernel: loop2: detected capacity change from 0 to 107312 May 16 16:09:41.868235 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. May 16 16:09:41.868249 systemd-tmpfiles[1222]: ACLs are not supported, ignoring. May 16 16:09:41.873249 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 16:09:41.878197 kernel: loop3: detected capacity change from 0 to 138376 May 16 16:09:41.886194 kernel: loop4: detected capacity change from 0 to 207008 May 16 16:09:41.891197 kernel: loop5: detected capacity change from 0 to 107312 May 16 16:09:41.894877 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 16 16:09:41.895293 (sd-merge)[1226]: Merged extensions into '/usr'. May 16 16:09:41.898441 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)... May 16 16:09:41.898458 systemd[1]: Reloading... May 16 16:09:41.948199 zram_generator::config[1249]: No configuration found. May 16 16:09:42.024449 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 16 16:09:42.032193 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 16 16:09:42.086981 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 16 16:09:42.087162 systemd[1]: Reloading finished in 188 ms. May 16 16:09:42.115663 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 16 16:09:42.117087 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 16 16:09:42.131396 systemd[1]: Starting ensure-sysext.service... May 16 16:09:42.133209 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 16:09:42.145359 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... May 16 16:09:42.145377 systemd[1]: Reloading... May 16 16:09:42.151719 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 16 16:09:42.152231 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 16 16:09:42.152568 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 16 16:09:42.152889 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 16 16:09:42.153624 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 16 16:09:42.153990 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. May 16 16:09:42.154107 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. May 16 16:09:42.163917 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. May 16 16:09:42.163986 systemd-tmpfiles[1287]: Skipping /boot May 16 16:09:42.173158 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. 
May 16 16:09:42.173275 systemd-tmpfiles[1287]: Skipping /boot May 16 16:09:42.200231 zram_generator::config[1314]: No configuration found. May 16 16:09:42.262133 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 16:09:42.324429 systemd[1]: Reloading finished in 178 ms. May 16 16:09:42.334656 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 16 16:09:42.351691 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 16:09:42.359096 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 16:09:42.361453 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 16 16:09:42.368993 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 16 16:09:42.372122 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 16:09:42.377326 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 16:09:42.380451 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 16 16:09:42.385958 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 16:09:42.387067 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 16:09:42.393113 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 16:09:42.395612 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 16:09:42.396688 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 16 16:09:42.396824 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 16:09:42.401610 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 16 16:09:42.406213 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 16 16:09:42.408041 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 16:09:42.408236 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 16:09:42.410134 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 16:09:42.410313 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 16:09:42.412420 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 16:09:42.412585 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 16:09:42.421024 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 16:09:42.423491 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 16:09:42.425637 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 16:09:42.428530 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 16:09:42.429619 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 16:09:42.429779 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 16:09:42.435326 systemd-udevd[1355]: Using default interface naming scheme 'v255'. 
May 16 16:09:42.440506 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 16 16:09:42.443426 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 16 16:09:42.447443 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 16:09:42.453344 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 16:09:42.455833 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 16 16:09:42.462232 augenrules[1390]: No rules May 16 16:09:42.458929 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 16:09:42.460903 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 16:09:42.462566 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 16:09:42.464902 systemd[1]: audit-rules.service: Deactivated successfully. May 16 16:09:42.465393 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 16:09:42.466971 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 16:09:42.467299 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 16:09:42.470207 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 16 16:09:42.481235 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 16 16:09:42.503243 systemd[1]: Finished ensure-sysext.service. May 16 16:09:42.511406 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 16:09:42.513415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 16:09:42.516372 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 16:09:42.518379 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
May 16 16:09:42.526858 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 16:09:42.529631 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 16:09:42.531364 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 16:09:42.531410 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 16:09:42.532993 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 16:09:42.539393 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 16 16:09:42.543765 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 16:09:42.544294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 16:09:42.544472 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 16:09:42.545875 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 16:09:42.546043 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 16:09:42.552193 augenrules[1432]: /sbin/augenrules: No change May 16 16:09:42.552079 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 16 16:09:42.562012 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 16:09:42.562193 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 16:09:42.567360 augenrules[1465]: No rules May 16 16:09:42.571665 systemd[1]: audit-rules.service: Deactivated successfully. May 16 16:09:42.572546 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
May 16 16:09:42.573077 systemd-resolved[1353]: Positive Trust Anchors: May 16 16:09:42.573094 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 16:09:42.573126 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 16:09:42.575683 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 16:09:42.575863 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 16:09:42.580873 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 16:09:42.581085 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 16:09:42.582456 systemd-resolved[1353]: Defaulting to hostname 'linux'. May 16 16:09:42.583906 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 16 16:09:42.585757 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 16 16:09:42.618383 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 16:09:42.627524 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
May 16 16:09:42.631416 systemd-networkd[1446]: lo: Link UP May 16 16:09:42.631422 systemd-networkd[1446]: lo: Gained carrier May 16 16:09:42.635032 systemd-networkd[1446]: Enumeration completed May 16 16:09:42.635147 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 16:09:42.636419 systemd[1]: Reached target network.target - Network. May 16 16:09:42.639382 systemd-networkd[1446]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 16:09:42.639390 systemd-networkd[1446]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 16:09:42.639820 systemd-networkd[1446]: eth0: Link UP May 16 16:09:42.639943 systemd-networkd[1446]: eth0: Gained carrier May 16 16:09:42.639956 systemd-networkd[1446]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 16:09:42.644193 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 16 16:09:42.646640 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 16 16:09:42.648013 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 16 16:09:42.649524 systemd[1]: Reached target sysinit.target - System Initialization. May 16 16:09:42.650731 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 16 16:09:42.653036 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 16 16:09:42.654225 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 16 16:09:42.655349 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
May 16 16:09:42.655380 systemd[1]: Reached target paths.target - Path Units. May 16 16:09:42.656227 systemd[1]: Reached target time-set.target - System Time Set. May 16 16:09:42.657300 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 16 16:09:42.658418 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 16 16:09:42.659600 systemd[1]: Reached target timers.target - Timer Units. May 16 16:09:42.661320 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 16 16:09:42.662233 systemd-networkd[1446]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 16:09:42.665646 systemd-timesyncd[1449]: Network configuration changed, trying to establish connection. May 16 16:09:42.666135 systemd[1]: Starting docker.socket - Docker Socket for the API... May 16 16:09:43.120165 systemd-resolved[1353]: Clock change detected. Flushing caches. May 16 16:09:43.120185 systemd-timesyncd[1449]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 16 16:09:43.120282 systemd-timesyncd[1449]: Initial clock synchronization to Fri 2025-05-16 16:09:43.120111 UTC. May 16 16:09:43.121155 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 16 16:09:43.122849 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 16 16:09:43.124596 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 16 16:09:43.127660 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 16 16:09:43.130259 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 16 16:09:43.133905 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 16 16:09:43.135986 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
May 16 16:09:43.137533 systemd[1]: Reached target sockets.target - Socket Units. May 16 16:09:43.140032 systemd[1]: Reached target basic.target - Basic System. May 16 16:09:43.140772 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 16 16:09:43.140804 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 16 16:09:43.157249 systemd[1]: Starting containerd.service - containerd container runtime... May 16 16:09:43.160904 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 16 16:09:43.169423 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 16 16:09:43.172053 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 16 16:09:43.175277 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 16 16:09:43.176287 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 16 16:09:43.184940 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 16 16:09:43.189627 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 16 16:09:43.192308 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 16 16:09:43.195747 jq[1498]: false May 16 16:09:43.195474 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 16 16:09:43.200093 systemd[1]: Starting systemd-logind.service - User Login Management... 
May 16 16:09:43.201106 extend-filesystems[1499]: Found loop3 May 16 16:09:43.201106 extend-filesystems[1499]: Found loop4 May 16 16:09:43.201106 extend-filesystems[1499]: Found loop5 May 16 16:09:43.201106 extend-filesystems[1499]: Found vda May 16 16:09:43.201106 extend-filesystems[1499]: Found vda1 May 16 16:09:43.201106 extend-filesystems[1499]: Found vda2 May 16 16:09:43.201106 extend-filesystems[1499]: Found vda3 May 16 16:09:43.201106 extend-filesystems[1499]: Found usr May 16 16:09:43.201106 extend-filesystems[1499]: Found vda4 May 16 16:09:43.201106 extend-filesystems[1499]: Found vda6 May 16 16:09:43.201106 extend-filesystems[1499]: Found vda7 May 16 16:09:43.201106 extend-filesystems[1499]: Found vda9 May 16 16:09:43.201106 extend-filesystems[1499]: Checking size of /dev/vda9 May 16 16:09:43.201799 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 16:09:43.202223 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 16:09:43.204576 systemd[1]: Starting update-engine.service - Update Engine... May 16 16:09:43.220281 extend-filesystems[1499]: Resized partition /dev/vda9 May 16 16:09:43.207003 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 16 16:09:43.215240 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 16 16:09:43.221295 jq[1511]: true May 16 16:09:43.220478 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 16 16:09:43.222691 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 16:09:43.222894 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 16 16:09:43.223155 systemd[1]: motdgen.service: Deactivated successfully. 
May 16 16:09:43.223308 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 16 16:09:43.225513 extend-filesystems[1521]: resize2fs 1.47.2 (1-Jan-2025) May 16 16:09:43.227217 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 16:09:43.230902 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 16 16:09:43.235743 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 16 16:09:43.251907 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 16 16:09:43.259261 (ntainerd)[1526]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 16 16:09:43.264515 jq[1525]: true May 16 16:09:43.267765 extend-filesystems[1521]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 16:09:43.267765 extend-filesystems[1521]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 16:09:43.267765 extend-filesystems[1521]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 16 16:09:43.276943 extend-filesystems[1499]: Resized filesystem in /dev/vda9 May 16 16:09:43.274011 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 16:09:43.275928 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 16 16:09:43.290357 tar[1523]: linux-arm64/LICENSE May 16 16:09:43.290357 tar[1523]: linux-arm64/helm May 16 16:09:43.300140 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 16:09:43.314967 update_engine[1510]: I20250516 16:09:43.314692 1510 main.cc:92] Flatcar Update Engine starting May 16 16:09:43.322963 systemd-logind[1508]: Watching system buttons on /dev/input/event0 (Power Button) May 16 16:09:43.325861 systemd-logind[1508]: New seat seat0. May 16 16:09:43.327391 systemd[1]: Started systemd-logind.service - User Login Management. 
May 16 16:09:43.328914 dbus-daemon[1496]: [system] SELinux support is enabled May 16 16:09:43.329464 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 16 16:09:43.332589 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 16:09:43.332760 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 16 16:09:43.334866 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 16:09:43.334899 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 16 16:09:43.342637 dbus-daemon[1496]: [system] Successfully activated service 'org.freedesktop.systemd1' May 16 16:09:43.347424 update_engine[1510]: I20250516 16:09:43.344534 1510 update_check_scheduler.cc:74] Next update check in 8m56s May 16 16:09:43.344708 systemd[1]: Started update-engine.service - Update Engine. May 16 16:09:43.347625 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 16 16:09:43.355934 bash[1558]: Updated "/home/core/.ssh/authorized_keys" May 16 16:09:43.364928 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 16 16:09:43.367537 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 16 16:09:43.378569 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 16 16:09:43.420744 locksmithd[1559]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 16:09:43.490938 containerd[1526]: time="2025-05-16T16:09:43Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 16 16:09:43.493023 containerd[1526]: time="2025-05-16T16:09:43.492991139Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 16 16:09:43.501553 containerd[1526]: time="2025-05-16T16:09:43.501505899Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="17.76µs" May 16 16:09:43.501553 containerd[1526]: time="2025-05-16T16:09:43.501549219Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 16 16:09:43.501627 containerd[1526]: time="2025-05-16T16:09:43.501574139Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 16 16:09:43.501738 containerd[1526]: time="2025-05-16T16:09:43.501718059Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 16 16:09:43.501777 containerd[1526]: time="2025-05-16T16:09:43.501739499Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 16 16:09:43.501777 containerd[1526]: time="2025-05-16T16:09:43.501766739Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 16:09:43.501874 containerd[1526]: time="2025-05-16T16:09:43.501817419Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 16 16:09:43.501874 containerd[1526]: time="2025-05-16T16:09:43.501835379Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 16:09:43.502137 containerd[1526]: time="2025-05-16T16:09:43.502110579Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 16 16:09:43.502163 containerd[1526]: time="2025-05-16T16:09:43.502137259Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 16:09:43.502163 containerd[1526]: time="2025-05-16T16:09:43.502150099Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 16 16:09:43.502163 containerd[1526]: time="2025-05-16T16:09:43.502162339Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 16 16:09:43.502261 containerd[1526]: time="2025-05-16T16:09:43.502243019Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 16 16:09:43.502458 containerd[1526]: time="2025-05-16T16:09:43.502435939Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 16:09:43.502488 containerd[1526]: time="2025-05-16T16:09:43.502475499Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 16 16:09:43.502515 containerd[1526]: time="2025-05-16T16:09:43.502489739Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 16 16:09:43.502542 containerd[1526]: time="2025-05-16T16:09:43.502531739Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 16 16:09:43.502994 containerd[1526]: time="2025-05-16T16:09:43.502910459Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 16 16:09:43.502994 containerd[1526]: time="2025-05-16T16:09:43.502989579Z" level=info msg="metadata content store policy set" policy=shared May 16 16:09:43.506247 containerd[1526]: time="2025-05-16T16:09:43.506216619Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 16 16:09:43.506312 containerd[1526]: time="2025-05-16T16:09:43.506258979Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 16 16:09:43.506312 containerd[1526]: time="2025-05-16T16:09:43.506272299Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 16 16:09:43.506312 containerd[1526]: time="2025-05-16T16:09:43.506283499Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 16 16:09:43.506312 containerd[1526]: time="2025-05-16T16:09:43.506295099Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 16 16:09:43.506312 containerd[1526]: time="2025-05-16T16:09:43.506306859Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 16 16:09:43.507585 containerd[1526]: time="2025-05-16T16:09:43.506317859Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 16 16:09:43.507585 containerd[1526]: time="2025-05-16T16:09:43.506354819Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 16 16:09:43.507585 containerd[1526]: time="2025-05-16T16:09:43.506374499Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 16 16:09:43.507585 containerd[1526]: time="2025-05-16T16:09:43.506388499Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 16 16:09:43.507585 containerd[1526]: time="2025-05-16T16:09:43.506397619Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 16 16:09:43.507585 containerd[1526]: time="2025-05-16T16:09:43.506409259Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 16 16:09:43.507585 containerd[1526]: time="2025-05-16T16:09:43.506536379Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 16 16:09:43.507585 containerd[1526]: time="2025-05-16T16:09:43.506556619Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 16 16:09:43.507585 containerd[1526]: time="2025-05-16T16:09:43.506570659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 16 16:09:43.507585 containerd[1526]: time="2025-05-16T16:09:43.506580539Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 16 16:09:43.507585 containerd[1526]: time="2025-05-16T16:09:43.506591259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 16 16:09:43.507585 containerd[1526]: time="2025-05-16T16:09:43.506601259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 16 16:09:43.507585 containerd[1526]: time="2025-05-16T16:09:43.506616059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 16 16:09:43.507585 containerd[1526]: time="2025-05-16T16:09:43.506627979Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 16 
16:09:43.507585 containerd[1526]: time="2025-05-16T16:09:43.506638339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 16 16:09:43.507853 containerd[1526]: time="2025-05-16T16:09:43.506647859Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 16 16:09:43.507853 containerd[1526]: time="2025-05-16T16:09:43.506657299Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 16 16:09:43.507853 containerd[1526]: time="2025-05-16T16:09:43.506834659Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 16 16:09:43.507853 containerd[1526]: time="2025-05-16T16:09:43.506848899Z" level=info msg="Start snapshots syncer" May 16 16:09:43.507853 containerd[1526]: time="2025-05-16T16:09:43.506906499Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 16 16:09:43.507955 containerd[1526]: time="2025-05-16T16:09:43.507208699Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 16 16:09:43.507955 containerd[1526]: time="2025-05-16T16:09:43.507257859Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 16 16:09:43.508066 containerd[1526]: time="2025-05-16T16:09:43.507335699Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 16 16:09:43.508066 containerd[1526]: time="2025-05-16T16:09:43.507458819Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 16 16:09:43.508066 containerd[1526]: time="2025-05-16T16:09:43.507481299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 16 16:09:43.508066 containerd[1526]: time="2025-05-16T16:09:43.507500699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 16 16:09:43.508066 containerd[1526]: time="2025-05-16T16:09:43.507515579Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 16 16:09:43.508066 containerd[1526]: time="2025-05-16T16:09:43.507528259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 16 16:09:43.508066 containerd[1526]: time="2025-05-16T16:09:43.507538499Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 16 16:09:43.508066 containerd[1526]: time="2025-05-16T16:09:43.507548259Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 16 16:09:43.508066 containerd[1526]: time="2025-05-16T16:09:43.507577859Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 16 16:09:43.508066 containerd[1526]: time="2025-05-16T16:09:43.507588339Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 16 16:09:43.508066 containerd[1526]: time="2025-05-16T16:09:43.507604699Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 16 16:09:43.508066 containerd[1526]: time="2025-05-16T16:09:43.507644979Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 16:09:43.508066 containerd[1526]: time="2025-05-16T16:09:43.507660299Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 16 16:09:43.508066 containerd[1526]: time="2025-05-16T16:09:43.507668779Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 16:09:43.508278 containerd[1526]: time="2025-05-16T16:09:43.507678539Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 16 16:09:43.508278 containerd[1526]: time="2025-05-16T16:09:43.507685979Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 16 16:09:43.508278 containerd[1526]: time="2025-05-16T16:09:43.507694659Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 16 16:09:43.508278 containerd[1526]: time="2025-05-16T16:09:43.507704699Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 16 16:09:43.508278 containerd[1526]: time="2025-05-16T16:09:43.507779099Z" level=info msg="runtime interface created" May 16 16:09:43.508278 containerd[1526]: time="2025-05-16T16:09:43.507783939Z" level=info msg="created NRI interface" May 16 16:09:43.508278 containerd[1526]: time="2025-05-16T16:09:43.507791099Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 16 16:09:43.508278 containerd[1526]: time="2025-05-16T16:09:43.507801219Z" level=info msg="Connect containerd service" May 16 16:09:43.508278 containerd[1526]: time="2025-05-16T16:09:43.507827299Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 16 16:09:43.508534 
containerd[1526]: time="2025-05-16T16:09:43.508415899Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 16:09:43.516369 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 16 16:09:43.613347 containerd[1526]: time="2025-05-16T16:09:43.613239139Z" level=info msg="Start subscribing containerd event" May 16 16:09:43.613520 containerd[1526]: time="2025-05-16T16:09:43.613484499Z" level=info msg="Start recovering state" May 16 16:09:43.613664 containerd[1526]: time="2025-05-16T16:09:43.613275819Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 16:09:43.613855 containerd[1526]: time="2025-05-16T16:09:43.613830579Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 16:09:43.613982 containerd[1526]: time="2025-05-16T16:09:43.613763139Z" level=info msg="Start event monitor" May 16 16:09:43.616950 containerd[1526]: time="2025-05-16T16:09:43.616923419Z" level=info msg="Start cni network conf syncer for default" May 16 16:09:43.617034 containerd[1526]: time="2025-05-16T16:09:43.617021939Z" level=info msg="Start streaming server" May 16 16:09:43.617079 containerd[1526]: time="2025-05-16T16:09:43.617068779Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 16 16:09:43.617137 containerd[1526]: time="2025-05-16T16:09:43.617125739Z" level=info msg="runtime interface starting up..." May 16 16:09:43.617189 containerd[1526]: time="2025-05-16T16:09:43.617177259Z" level=info msg="starting plugins..." 
May 16 16:09:43.617242 containerd[1526]: time="2025-05-16T16:09:43.617231699Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 16 16:09:43.617421 containerd[1526]: time="2025-05-16T16:09:43.617403339Z" level=info msg="containerd successfully booted in 0.126817s" May 16 16:09:43.617513 systemd[1]: Started containerd.service - containerd container runtime. May 16 16:09:43.714632 tar[1523]: linux-arm64/README.md May 16 16:09:43.732918 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 16 16:09:44.576574 sshd_keygen[1522]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 16:09:44.594755 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 16 16:09:44.597448 systemd[1]: Starting issuegen.service - Generate /run/issue... May 16 16:09:44.599296 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:49782.service - OpenSSH per-connection server daemon (10.0.0.1:49782). May 16 16:09:44.612207 systemd[1]: issuegen.service: Deactivated successfully. May 16 16:09:44.612413 systemd[1]: Finished issuegen.service - Generate /run/issue. May 16 16:09:44.614924 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 16 16:09:44.637099 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 16 16:09:44.640318 systemd[1]: Started getty@tty1.service - Getty on tty1. May 16 16:09:44.642835 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 16 16:09:44.644686 systemd[1]: Reached target getty.target - Login Prompts. May 16 16:09:44.677363 sshd[1602]: Accepted publickey for core from 10.0.0.1 port 49782 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:09:44.678917 sshd-session[1602]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:09:44.690272 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
May 16 16:09:44.693092 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 16 16:09:44.695936 systemd-logind[1508]: New session 1 of user core. May 16 16:09:44.720908 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 16 16:09:44.723980 systemd[1]: Starting user@500.service - User Manager for UID 500... May 16 16:09:44.736496 (systemd)[1613]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 16:09:44.738455 systemd-logind[1508]: New session c1 of user core. May 16 16:09:44.763984 systemd-networkd[1446]: eth0: Gained IPv6LL May 16 16:09:44.765707 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 16 16:09:44.767904 systemd[1]: Reached target network-online.target - Network is Online. May 16 16:09:44.770333 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 16 16:09:44.772555 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:09:44.791317 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 16 16:09:44.805782 systemd[1]: coreos-metadata.service: Deactivated successfully. May 16 16:09:44.807965 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 16 16:09:44.810229 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 16 16:09:44.813102 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 16 16:09:44.849587 systemd[1613]: Queued start job for default target default.target. May 16 16:09:44.859000 systemd[1613]: Created slice app.slice - User Application Slice. May 16 16:09:44.859032 systemd[1613]: Reached target paths.target - Paths. May 16 16:09:44.859100 systemd[1613]: Reached target timers.target - Timers. May 16 16:09:44.860371 systemd[1613]: Starting dbus.socket - D-Bus User Message Bus Socket... 
May 16 16:09:44.869320 systemd[1613]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 16 16:09:44.869379 systemd[1613]: Reached target sockets.target - Sockets. May 16 16:09:44.869415 systemd[1613]: Reached target basic.target - Basic System. May 16 16:09:44.869443 systemd[1613]: Reached target default.target - Main User Target. May 16 16:09:44.869467 systemd[1613]: Startup finished in 126ms. May 16 16:09:44.869721 systemd[1]: Started user@500.service - User Manager for UID 500. May 16 16:09:44.882031 systemd[1]: Started session-1.scope - Session 1 of User core. May 16 16:09:44.940037 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:49788.service - OpenSSH per-connection server daemon (10.0.0.1:49788). May 16 16:09:44.998709 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 49788 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:09:44.999946 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:09:45.004806 systemd-logind[1508]: New session 2 of user core. May 16 16:09:45.018039 systemd[1]: Started session-2.scope - Session 2 of User core. May 16 16:09:45.072058 sshd[1644]: Connection closed by 10.0.0.1 port 49788 May 16 16:09:45.072433 sshd-session[1642]: pam_unix(sshd:session): session closed for user core May 16 16:09:45.084930 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:49788.service: Deactivated successfully. May 16 16:09:45.088057 systemd[1]: session-2.scope: Deactivated successfully. May 16 16:09:45.091052 systemd-logind[1508]: Session 2 logged out. Waiting for processes to exit. May 16 16:09:45.093852 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:49800.service - OpenSSH per-connection server daemon (10.0.0.1:49800). May 16 16:09:45.096108 systemd-logind[1508]: Removed session 2. 
May 16 16:09:45.153119 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 49800 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:09:45.154372 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:09:45.160356 systemd-logind[1508]: New session 3 of user core.
May 16 16:09:45.166050 systemd[1]: Started session-3.scope - Session 3 of User core.
May 16 16:09:45.217290 sshd[1652]: Connection closed by 10.0.0.1 port 49800
May 16 16:09:45.217745 sshd-session[1650]: pam_unix(sshd:session): session closed for user core
May 16 16:09:45.221552 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:49800.service: Deactivated successfully.
May 16 16:09:45.225072 systemd[1]: session-3.scope: Deactivated successfully.
May 16 16:09:45.226555 systemd-logind[1508]: Session 3 logged out. Waiting for processes to exit.
May 16 16:09:45.228076 systemd-logind[1508]: Removed session 3.
May 16 16:09:45.358641 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:09:45.360178 systemd[1]: Reached target multi-user.target - Multi-User System.
May 16 16:09:45.364159 (kubelet)[1662]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 16:09:45.366062 systemd[1]: Startup finished in 2.097s (kernel) + 10.498s (initrd) + 3.846s (userspace) = 16.442s.
May 16 16:09:45.757872 kubelet[1662]: E0516 16:09:45.757815 1662 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 16:09:45.760345 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 16:09:45.760479 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 16:09:45.760766 systemd[1]: kubelet.service: Consumed 792ms CPU time, 256.1M memory peak.
May 16 16:09:55.231949 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:59556.service - OpenSSH per-connection server daemon (10.0.0.1:59556).
May 16 16:09:55.270250 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 59556 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:09:55.271344 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:09:55.275777 systemd-logind[1508]: New session 4 of user core.
May 16 16:09:55.287023 systemd[1]: Started session-4.scope - Session 4 of User core.
May 16 16:09:55.336591 sshd[1677]: Connection closed by 10.0.0.1 port 59556
May 16 16:09:55.337054 sshd-session[1675]: pam_unix(sshd:session): session closed for user core
May 16 16:09:55.346576 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:59556.service: Deactivated successfully.
May 16 16:09:55.348283 systemd[1]: session-4.scope: Deactivated successfully.
May 16 16:09:55.348935 systemd-logind[1508]: Session 4 logged out. Waiting for processes to exit.
May 16 16:09:55.351771 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:59560.service - OpenSSH per-connection server daemon (10.0.0.1:59560).
May 16 16:09:55.352255 systemd-logind[1508]: Removed session 4.
May 16 16:09:55.399595 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 59560 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:09:55.400788 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:09:55.405020 systemd-logind[1508]: New session 5 of user core.
May 16 16:09:55.415125 systemd[1]: Started session-5.scope - Session 5 of User core.
May 16 16:09:55.462864 sshd[1685]: Connection closed by 10.0.0.1 port 59560
May 16 16:09:55.463146 sshd-session[1683]: pam_unix(sshd:session): session closed for user core
May 16 16:09:55.472793 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:59560.service: Deactivated successfully.
May 16 16:09:55.475292 systemd[1]: session-5.scope: Deactivated successfully.
May 16 16:09:55.475963 systemd-logind[1508]: Session 5 logged out. Waiting for processes to exit.
May 16 16:09:55.478465 systemd[1]: Started sshd@5-10.0.0.50:22-10.0.0.1:59570.service - OpenSSH per-connection server daemon (10.0.0.1:59570).
May 16 16:09:55.479968 systemd-logind[1508]: Removed session 5.
May 16 16:09:55.525956 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 59570 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:09:55.527041 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:09:55.530949 systemd-logind[1508]: New session 6 of user core.
May 16 16:09:55.545026 systemd[1]: Started session-6.scope - Session 6 of User core.
May 16 16:09:55.595103 sshd[1693]: Connection closed by 10.0.0.1 port 59570
May 16 16:09:55.595554 sshd-session[1691]: pam_unix(sshd:session): session closed for user core
May 16 16:09:55.607975 systemd[1]: sshd@5-10.0.0.50:22-10.0.0.1:59570.service: Deactivated successfully.
May 16 16:09:55.610079 systemd[1]: session-6.scope: Deactivated successfully.
May 16 16:09:55.610693 systemd-logind[1508]: Session 6 logged out. Waiting for processes to exit.
May 16 16:09:55.612721 systemd[1]: Started sshd@6-10.0.0.50:22-10.0.0.1:59580.service - OpenSSH per-connection server daemon (10.0.0.1:59580).
May 16 16:09:55.613531 systemd-logind[1508]: Removed session 6.
May 16 16:09:55.665229 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 59580 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:09:55.666311 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:09:55.669947 systemd-logind[1508]: New session 7 of user core.
May 16 16:09:55.677027 systemd[1]: Started session-7.scope - Session 7 of User core.
May 16 16:09:55.737988 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 16 16:09:55.738252 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:09:55.751430 sudo[1702]: pam_unix(sudo:session): session closed for user root
May 16 16:09:55.754586 sshd[1701]: Connection closed by 10.0.0.1 port 59580
May 16 16:09:55.755097 sshd-session[1699]: pam_unix(sshd:session): session closed for user core
May 16 16:09:55.768852 systemd[1]: sshd@6-10.0.0.50:22-10.0.0.1:59580.service: Deactivated successfully.
May 16 16:09:55.771276 systemd[1]: session-7.scope: Deactivated successfully.
May 16 16:09:55.772129 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 16 16:09:55.774572 systemd-logind[1508]: Session 7 logged out. Waiting for processes to exit.
May 16 16:09:55.775089 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:09:55.776193 systemd[1]: Started sshd@7-10.0.0.50:22-10.0.0.1:59584.service - OpenSSH per-connection server daemon (10.0.0.1:59584).
May 16 16:09:55.778761 systemd-logind[1508]: Removed session 7.
May 16 16:09:55.828111 sshd[1709]: Accepted publickey for core from 10.0.0.1 port 59584 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:09:55.829813 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:09:55.834234 systemd-logind[1508]: New session 8 of user core.
May 16 16:09:55.845060 systemd[1]: Started session-8.scope - Session 8 of User core.
May 16 16:09:55.896540 sudo[1717]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 16 16:09:55.897114 sudo[1717]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:09:55.916357 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:09:55.919955 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 16:09:55.969457 sudo[1717]: pam_unix(sudo:session): session closed for user root
May 16 16:09:55.974548 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 16 16:09:55.975128 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:09:55.983993 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 16 16:09:56.003425 kubelet[1722]: E0516 16:09:56.003349 1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 16:09:56.007172 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 16:09:56.007299 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 16:09:56.007863 systemd[1]: kubelet.service: Consumed 148ms CPU time, 107.8M memory peak.
May 16 16:09:56.021545 augenrules[1750]: No rules
May 16 16:09:56.022672 systemd[1]: audit-rules.service: Deactivated successfully.
May 16 16:09:56.022930 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 16 16:09:56.024063 sudo[1716]: pam_unix(sudo:session): session closed for user root
May 16 16:09:56.025264 sshd[1713]: Connection closed by 10.0.0.1 port 59584
May 16 16:09:56.025670 sshd-session[1709]: pam_unix(sshd:session): session closed for user core
May 16 16:09:56.034755 systemd[1]: sshd@7-10.0.0.50:22-10.0.0.1:59584.service: Deactivated successfully.
May 16 16:09:56.036322 systemd[1]: session-8.scope: Deactivated successfully.
May 16 16:09:56.038459 systemd-logind[1508]: Session 8 logged out. Waiting for processes to exit.
May 16 16:09:56.041807 systemd[1]: Started sshd@8-10.0.0.50:22-10.0.0.1:59586.service - OpenSSH per-connection server daemon (10.0.0.1:59586).
May 16 16:09:56.042501 systemd-logind[1508]: Removed session 8.
May 16 16:09:56.097599 sshd[1759]: Accepted publickey for core from 10.0.0.1 port 59586 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:09:56.098701 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:09:56.102939 systemd-logind[1508]: New session 9 of user core.
May 16 16:09:56.109012 systemd[1]: Started session-9.scope - Session 9 of User core.
May 16 16:09:56.158591 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 16 16:09:56.159162 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 16 16:09:56.523826 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 16 16:09:56.535167 (dockerd)[1783]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 16 16:09:56.794428 dockerd[1783]: time="2025-05-16T16:09:56.794300819Z" level=info msg="Starting up"
May 16 16:09:56.795918 dockerd[1783]: time="2025-05-16T16:09:56.795201819Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 16 16:09:56.875752 dockerd[1783]: time="2025-05-16T16:09:56.875691979Z" level=info msg="Loading containers: start."
May 16 16:09:56.905532 kernel: Initializing XFRM netlink socket
May 16 16:09:57.100993 systemd-networkd[1446]: docker0: Link UP
May 16 16:09:57.104420 dockerd[1783]: time="2025-05-16T16:09:57.104379699Z" level=info msg="Loading containers: done."
May 16 16:09:57.117053 dockerd[1783]: time="2025-05-16T16:09:57.117010859Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 16 16:09:57.117170 dockerd[1783]: time="2025-05-16T16:09:57.117088419Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 16 16:09:57.117200 dockerd[1783]: time="2025-05-16T16:09:57.117177979Z" level=info msg="Initializing buildkit"
May 16 16:09:57.136343 dockerd[1783]: time="2025-05-16T16:09:57.136304539Z" level=info msg="Completed buildkit initialization"
May 16 16:09:57.142833 dockerd[1783]: time="2025-05-16T16:09:57.142787899Z" level=info msg="Daemon has completed initialization"
May 16 16:09:57.142916 dockerd[1783]: time="2025-05-16T16:09:57.142852939Z" level=info msg="API listen on /run/docker.sock"
May 16 16:09:57.143121 systemd[1]: Started docker.service - Docker Application Container Engine.
May 16 16:09:57.922332 containerd[1526]: time="2025-05-16T16:09:57.922282859Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\""
May 16 16:09:58.775687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884447775.mount: Deactivated successfully.
May 16 16:09:59.832638 containerd[1526]: time="2025-05-16T16:09:59.832585539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:59.833215 containerd[1526]: time="2025-05-16T16:09:59.833179979Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=26326313"
May 16 16:09:59.833920 containerd[1526]: time="2025-05-16T16:09:59.833890939Z" level=info msg="ImageCreate event name:\"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:59.836173 containerd[1526]: time="2025-05-16T16:09:59.836115539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:09:59.837166 containerd[1526]: time="2025-05-16T16:09:59.837144619Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"26323111\" in 1.91482168s"
May 16 16:09:59.837225 containerd[1526]: time="2025-05-16T16:09:59.837175419Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\""
May 16 16:09:59.837859 containerd[1526]: time="2025-05-16T16:09:59.837696699Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\""
May 16 16:10:01.070221 containerd[1526]: time="2025-05-16T16:10:01.070160299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:10:01.072268 containerd[1526]: time="2025-05-16T16:10:01.072195299Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=22530549"
May 16 16:10:01.072911 containerd[1526]: time="2025-05-16T16:10:01.072889899Z" level=info msg="ImageCreate event name:\"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:10:01.077867 containerd[1526]: time="2025-05-16T16:10:01.077810739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:10:01.079421 containerd[1526]: time="2025-05-16T16:10:01.079366939Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"24066313\" in 1.241624s"
May 16 16:10:01.079531 containerd[1526]: time="2025-05-16T16:10:01.079396739Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\""
May 16 16:10:01.080001 containerd[1526]: time="2025-05-16T16:10:01.079980619Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\""
May 16 16:10:02.198599 containerd[1526]: time="2025-05-16T16:10:02.198540539Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:10:02.199025 containerd[1526]: time="2025-05-16T16:10:02.198943299Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=17484192"
May 16 16:10:02.199995 containerd[1526]: time="2025-05-16T16:10:02.199961019Z" level=info msg="ImageCreate event name:\"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:10:02.204599 containerd[1526]: time="2025-05-16T16:10:02.204562419Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:10:02.205558 containerd[1526]: time="2025-05-16T16:10:02.205505459Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"19019974\" in 1.12541468s"
May 16 16:10:02.205591 containerd[1526]: time="2025-05-16T16:10:02.205556579Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\""
May 16 16:10:02.206129 containerd[1526]: time="2025-05-16T16:10:02.206090219Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\""
May 16 16:10:03.268625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983258642.mount: Deactivated successfully.
May 16 16:10:03.600138 containerd[1526]: time="2025-05-16T16:10:03.600092339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:10:03.601031 containerd[1526]: time="2025-05-16T16:10:03.600847299Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=27377377"
May 16 16:10:03.601715 containerd[1526]: time="2025-05-16T16:10:03.601681579Z" level=info msg="ImageCreate event name:\"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:10:03.603788 containerd[1526]: time="2025-05-16T16:10:03.603755499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:10:03.604549 containerd[1526]: time="2025-05-16T16:10:03.604393779Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"27376394\" in 1.39826788s"
May 16 16:10:03.604549 containerd[1526]: time="2025-05-16T16:10:03.604433099Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\""
May 16 16:10:03.605046 containerd[1526]: time="2025-05-16T16:10:03.604874179Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 16 16:10:04.166319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4005400551.mount: Deactivated successfully.
May 16 16:10:04.860902 containerd[1526]: time="2025-05-16T16:10:04.860219579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:10:04.860902 containerd[1526]: time="2025-05-16T16:10:04.860678579Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
May 16 16:10:04.862541 containerd[1526]: time="2025-05-16T16:10:04.862512339Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:10:04.865169 containerd[1526]: time="2025-05-16T16:10:04.865139619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:10:04.867137 containerd[1526]: time="2025-05-16T16:10:04.867102459Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.26218404s"
May 16 16:10:04.867137 containerd[1526]: time="2025-05-16T16:10:04.867134379Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 16 16:10:04.867543 containerd[1526]: time="2025-05-16T16:10:04.867520579Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 16 16:10:05.272097 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1337931476.mount: Deactivated successfully.
May 16 16:10:05.276017 containerd[1526]: time="2025-05-16T16:10:05.275948739Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 16:10:05.276755 containerd[1526]: time="2025-05-16T16:10:05.276719179Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 16 16:10:05.277191 containerd[1526]: time="2025-05-16T16:10:05.277157899Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 16:10:05.278957 containerd[1526]: time="2025-05-16T16:10:05.278926059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 16 16:10:05.279499 containerd[1526]: time="2025-05-16T16:10:05.279459579Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 411.90864ms"
May 16 16:10:05.279499 containerd[1526]: time="2025-05-16T16:10:05.279489539Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 16 16:10:05.279979 containerd[1526]: time="2025-05-16T16:10:05.279953939Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
May 16 16:10:05.851156 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3836684480.mount: Deactivated successfully.
May 16 16:10:06.070185 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 16 16:10:06.071544 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:10:06.190793 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:10:06.194030 (kubelet)[2178]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 16:10:06.238929 kubelet[2178]: E0516 16:10:06.238856 2178 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 16:10:06.242404 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 16:10:06.242540 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 16:10:06.243076 systemd[1]: kubelet.service: Consumed 134ms CPU time, 108M memory peak.
May 16 16:10:07.537090 containerd[1526]: time="2025-05-16T16:10:07.537034899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:10:07.538803 containerd[1526]: time="2025-05-16T16:10:07.538735859Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
May 16 16:10:07.539512 containerd[1526]: time="2025-05-16T16:10:07.539464899Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:10:07.542674 containerd[1526]: time="2025-05-16T16:10:07.542638619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 16 16:10:07.543899 containerd[1526]: time="2025-05-16T16:10:07.543853579Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.26386528s"
May 16 16:10:07.543999 containerd[1526]: time="2025-05-16T16:10:07.543984659Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
May 16 16:10:13.070478 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:10:13.070616 systemd[1]: kubelet.service: Consumed 134ms CPU time, 108M memory peak.
May 16 16:10:13.072442 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:10:13.090687 systemd[1]: Reload requested from client PID 2221 ('systemctl') (unit session-9.scope)...
May 16 16:10:13.090707 systemd[1]: Reloading...
May 16 16:10:13.151928 zram_generator::config[2263]: No configuration found.
May 16 16:10:13.215054 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 16:10:13.297459 systemd[1]: Reloading finished in 206 ms.
May 16 16:10:13.329839 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 16 16:10:13.329929 systemd[1]: kubelet.service: Failed with result 'signal'.
May 16 16:10:13.331009 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:10:13.332711 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 16:10:13.457024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 16:10:13.460723 (kubelet)[2307]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 16 16:10:13.494259 kubelet[2307]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 16:10:13.494259 kubelet[2307]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 16 16:10:13.494259 kubelet[2307]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 16:10:13.494600 kubelet[2307]: I0516 16:10:13.494317 2307 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 16:10:13.934689 kubelet[2307]: I0516 16:10:13.934647 2307 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
May 16 16:10:13.934689 kubelet[2307]: I0516 16:10:13.934676 2307 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 16:10:13.934971 kubelet[2307]: I0516 16:10:13.934951 2307 server.go:954] "Client rotation is on, will bootstrap in background"
May 16 16:10:14.048108 kubelet[2307]: E0516 16:10:14.047175 2307 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.50:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
May 16 16:10:14.048870 kubelet[2307]: I0516 16:10:14.048772 2307 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 16:10:14.055937 kubelet[2307]: I0516 16:10:14.055919 2307 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 16 16:10:14.059161 kubelet[2307]: I0516 16:10:14.059099 2307 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 16:10:14.060217 kubelet[2307]: I0516 16:10:14.060181 2307 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 16:10:14.060389 kubelet[2307]: I0516 16:10:14.060217 2307 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 16:10:14.060487 kubelet[2307]: I0516 16:10:14.060467 2307 topology_manager.go:138] "Creating topology manager with none policy"
May 16 16:10:14.060487 kubelet[2307]: I0516 16:10:14.060477 2307 container_manager_linux.go:304] "Creating device plugin manager"
May 16 16:10:14.060681 kubelet[2307]: I0516 16:10:14.060658 2307 state_mem.go:36] "Initialized new in-memory state store"
May 16 16:10:14.064715 kubelet[2307]: I0516 16:10:14.064697 2307 kubelet.go:446] "Attempting to sync node with API server"
May 16 16:10:14.064765 kubelet[2307]: I0516 16:10:14.064722 2307 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 16:10:14.064765 kubelet[2307]: I0516 16:10:14.064744 2307 kubelet.go:352] "Adding apiserver pod source"
May 16 16:10:14.064765 kubelet[2307]: I0516 16:10:14.064754 2307 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 16:10:14.067569 kubelet[2307]: I0516 16:10:14.067224 2307 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 16 16:10:14.067569 kubelet[2307]: W0516 16:10:14.067452 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 16 16:10:14.067569 kubelet[2307]: E0516 16:10:14.067511 2307 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.50:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError"
May 16 16:10:14.067704 kubelet[2307]: W0516 16:10:14.067672 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused
May 16 16:10:14.067728
kubelet[2307]: E0516 16:10:14.067713 2307 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.50:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 16 16:10:14.067847 kubelet[2307]: I0516 16:10:14.067814 2307 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 16:10:14.067965 kubelet[2307]: W0516 16:10:14.067948 2307 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 16:10:14.068828 kubelet[2307]: I0516 16:10:14.068812 2307 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 16:10:14.068865 kubelet[2307]: I0516 16:10:14.068844 2307 server.go:1287] "Started kubelet" May 16 16:10:14.068951 kubelet[2307]: I0516 16:10:14.068927 2307 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 16:10:14.069903 kubelet[2307]: I0516 16:10:14.069886 2307 server.go:479] "Adding debug handlers to kubelet server" May 16 16:10:14.075978 kubelet[2307]: I0516 16:10:14.075952 2307 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 16:10:14.077482 kubelet[2307]: I0516 16:10:14.076670 2307 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 16:10:14.077482 kubelet[2307]: I0516 16:10:14.076758 2307 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 16:10:14.077482 kubelet[2307]: E0516 16:10:14.076987 2307 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:10:14.077482 kubelet[2307]: I0516 16:10:14.077277 2307 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 
16:10:14.077482 kubelet[2307]: I0516 16:10:14.077261 2307 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 16:10:14.077482 kubelet[2307]: I0516 16:10:14.077331 2307 reconciler.go:26] "Reconciler: start to sync state" May 16 16:10:14.077679 kubelet[2307]: I0516 16:10:14.077565 2307 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 16:10:14.077825 kubelet[2307]: W0516 16:10:14.077792 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused May 16 16:10:14.077868 kubelet[2307]: E0516 16:10:14.077843 2307 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.50:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 16 16:10:14.079013 kubelet[2307]: E0516 16:10:14.077940 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="200ms" May 16 16:10:14.081580 kubelet[2307]: E0516 16:10:14.077949 2307 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.50:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.50:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18400dc965e4a9b3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 16:10:14.068824499 +0000 UTC m=+0.605250721,LastTimestamp:2025-05-16 16:10:14.068824499 +0000 UTC m=+0.605250721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 16:10:14.082059 kubelet[2307]: I0516 16:10:14.082029 2307 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 16:10:14.082707 kubelet[2307]: E0516 16:10:14.082683 2307 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 16:10:14.083900 kubelet[2307]: I0516 16:10:14.082955 2307 factory.go:221] Registration of the containerd container factory successfully May 16 16:10:14.083900 kubelet[2307]: I0516 16:10:14.082985 2307 factory.go:221] Registration of the systemd container factory successfully May 16 16:10:14.094746 kubelet[2307]: I0516 16:10:14.094725 2307 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 16:10:14.095677 kubelet[2307]: I0516 16:10:14.095441 2307 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 16:10:14.095677 kubelet[2307]: I0516 16:10:14.095471 2307 state_mem.go:36] "Initialized new in-memory state store" May 16 16:10:14.095677 kubelet[2307]: I0516 16:10:14.095479 2307 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 16:10:14.096468 kubelet[2307]: I0516 16:10:14.096435 2307 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 16:10:14.096468 kubelet[2307]: I0516 16:10:14.096458 2307 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 16:10:14.096537 kubelet[2307]: I0516 16:10:14.096477 2307 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 16 16:10:14.096537 kubelet[2307]: I0516 16:10:14.096485 2307 kubelet.go:2382] "Starting kubelet main sync loop" May 16 16:10:14.096537 kubelet[2307]: E0516 16:10:14.096523 2307 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 16:10:14.097676 kubelet[2307]: I0516 16:10:14.097603 2307 policy_none.go:49] "None policy: Start" May 16 16:10:14.097676 kubelet[2307]: I0516 16:10:14.097624 2307 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 16:10:14.097676 kubelet[2307]: I0516 16:10:14.097634 2307 state_mem.go:35] "Initializing new in-memory state store" May 16 16:10:14.099855 kubelet[2307]: W0516 16:10:14.099817 2307 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.50:6443: connect: connection refused May 16 16:10:14.100159 kubelet[2307]: E0516 16:10:14.099858 2307 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.50:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.50:6443: connect: connection refused" logger="UnhandledError" May 16 16:10:14.102981 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 16 16:10:14.119801 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
May 16 16:10:14.122817 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 16 16:10:14.142741 kubelet[2307]: I0516 16:10:14.142718 2307 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 16:10:14.143440 kubelet[2307]: I0516 16:10:14.143381 2307 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 16:10:14.143440 kubelet[2307]: I0516 16:10:14.143402 2307 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 16:10:14.143671 kubelet[2307]: I0516 16:10:14.143650 2307 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 16:10:14.145107 kubelet[2307]: E0516 16:10:14.145060 2307 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 16:10:14.145107 kubelet[2307]: E0516 16:10:14.145099 2307 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 16:10:14.205743 systemd[1]: Created slice kubepods-burstable-pod3b0ad5a476987a221f8c6760552a0216.slice - libcontainer container kubepods-burstable-pod3b0ad5a476987a221f8c6760552a0216.slice. May 16 16:10:14.225926 kubelet[2307]: E0516 16:10:14.225816 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:10:14.227696 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. 
May 16 16:10:14.246222 kubelet[2307]: I0516 16:10:14.246189 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 16:10:14.246811 kubelet[2307]: E0516 16:10:14.246788 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" May 16 16:10:14.247999 kubelet[2307]: E0516 16:10:14.247979 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:10:14.250903 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. May 16 16:10:14.252600 kubelet[2307]: E0516 16:10:14.252579 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:10:14.278895 kubelet[2307]: I0516 16:10:14.278857 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 16:10:14.279153 kubelet[2307]: I0516 16:10:14.278993 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b0ad5a476987a221f8c6760552a0216-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b0ad5a476987a221f8c6760552a0216\") " pod="kube-system/kube-apiserver-localhost" May 16 16:10:14.279153 kubelet[2307]: I0516 16:10:14.279017 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:10:14.279153 kubelet[2307]: I0516 16:10:14.279032 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:10:14.279153 kubelet[2307]: I0516 16:10:14.279048 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:10:14.279153 kubelet[2307]: I0516 16:10:14.279064 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b0ad5a476987a221f8c6760552a0216-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b0ad5a476987a221f8c6760552a0216\") " pod="kube-system/kube-apiserver-localhost" May 16 16:10:14.279295 kubelet[2307]: I0516 16:10:14.279081 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b0ad5a476987a221f8c6760552a0216-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3b0ad5a476987a221f8c6760552a0216\") " pod="kube-system/kube-apiserver-localhost" May 16 16:10:14.279295 kubelet[2307]: I0516 16:10:14.279098 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:10:14.279295 kubelet[2307]: I0516 16:10:14.279114 2307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:10:14.279448 kubelet[2307]: E0516 16:10:14.279346 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="400ms" May 16 16:10:14.448846 kubelet[2307]: I0516 16:10:14.448813 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 16:10:14.449163 kubelet[2307]: E0516 16:10:14.449139 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" May 16 16:10:14.526720 kubelet[2307]: E0516 16:10:14.526617 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:14.527806 containerd[1526]: time="2025-05-16T16:10:14.527764579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3b0ad5a476987a221f8c6760552a0216,Namespace:kube-system,Attempt:0,}" May 16 16:10:14.543873 containerd[1526]: time="2025-05-16T16:10:14.542910699Z" level=info msg="connecting to shim 
29b99e1316e114c328bd7b10d57cf88fa4706179ec8ab6d12ca8e39c575c0330" address="unix:///run/containerd/s/7f3217889df874113b161362227a346d14562c274ff89f49534e49696c9c6ea4" namespace=k8s.io protocol=ttrpc version=3 May 16 16:10:14.549296 kubelet[2307]: E0516 16:10:14.549265 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:14.549790 containerd[1526]: time="2025-05-16T16:10:14.549760619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 16 16:10:14.553217 kubelet[2307]: E0516 16:10:14.553199 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:14.553998 containerd[1526]: time="2025-05-16T16:10:14.553966739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 16 16:10:14.566325 systemd[1]: Started cri-containerd-29b99e1316e114c328bd7b10d57cf88fa4706179ec8ab6d12ca8e39c575c0330.scope - libcontainer container 29b99e1316e114c328bd7b10d57cf88fa4706179ec8ab6d12ca8e39c575c0330. 
May 16 16:10:14.574401 containerd[1526]: time="2025-05-16T16:10:14.574343379Z" level=info msg="connecting to shim ea1d52854b3185236f93c0d1ac6e85e8d64385c55cfa11969ae3433a8b302807" address="unix:///run/containerd/s/fe3c231606969038e2378cfbc327d52996a599b95cfbd45c0df704777de18f18" namespace=k8s.io protocol=ttrpc version=3 May 16 16:10:14.580776 containerd[1526]: time="2025-05-16T16:10:14.580692899Z" level=info msg="connecting to shim 24fbba468bbe3fed962f876d3f1b7a58df53a516ac59d237eb1d5ce01394a139" address="unix:///run/containerd/s/94fbfda1c072ee807bbad9a6c9becc48e80ec15b001e0ea6239758fd107fbbe4" namespace=k8s.io protocol=ttrpc version=3 May 16 16:10:14.603460 systemd[1]: Started cri-containerd-ea1d52854b3185236f93c0d1ac6e85e8d64385c55cfa11969ae3433a8b302807.scope - libcontainer container ea1d52854b3185236f93c0d1ac6e85e8d64385c55cfa11969ae3433a8b302807. May 16 16:10:14.606012 systemd[1]: Started cri-containerd-24fbba468bbe3fed962f876d3f1b7a58df53a516ac59d237eb1d5ce01394a139.scope - libcontainer container 24fbba468bbe3fed962f876d3f1b7a58df53a516ac59d237eb1d5ce01394a139. 
May 16 16:10:14.615219 containerd[1526]: time="2025-05-16T16:10:14.615181579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3b0ad5a476987a221f8c6760552a0216,Namespace:kube-system,Attempt:0,} returns sandbox id \"29b99e1316e114c328bd7b10d57cf88fa4706179ec8ab6d12ca8e39c575c0330\"" May 16 16:10:14.616579 kubelet[2307]: E0516 16:10:14.616553 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:14.619180 containerd[1526]: time="2025-05-16T16:10:14.619122899Z" level=info msg="CreateContainer within sandbox \"29b99e1316e114c328bd7b10d57cf88fa4706179ec8ab6d12ca8e39c575c0330\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 16:10:14.629675 containerd[1526]: time="2025-05-16T16:10:14.629637619Z" level=info msg="Container e7c06f1f95daf8ffefc8868fa3a978734f5d85f86c5064dbc8d8bd6467339f22: CDI devices from CRI Config.CDIDevices: []" May 16 16:10:14.640367 containerd[1526]: time="2025-05-16T16:10:14.640317299Z" level=info msg="CreateContainer within sandbox \"29b99e1316e114c328bd7b10d57cf88fa4706179ec8ab6d12ca8e39c575c0330\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e7c06f1f95daf8ffefc8868fa3a978734f5d85f86c5064dbc8d8bd6467339f22\"" May 16 16:10:14.641197 containerd[1526]: time="2025-05-16T16:10:14.640924179Z" level=info msg="StartContainer for \"e7c06f1f95daf8ffefc8868fa3a978734f5d85f86c5064dbc8d8bd6467339f22\"" May 16 16:10:14.642220 containerd[1526]: time="2025-05-16T16:10:14.642052699Z" level=info msg="connecting to shim e7c06f1f95daf8ffefc8868fa3a978734f5d85f86c5064dbc8d8bd6467339f22" address="unix:///run/containerd/s/7f3217889df874113b161362227a346d14562c274ff89f49534e49696c9c6ea4" protocol=ttrpc version=3 May 16 16:10:14.646456 containerd[1526]: time="2025-05-16T16:10:14.646399299Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea1d52854b3185236f93c0d1ac6e85e8d64385c55cfa11969ae3433a8b302807\"" May 16 16:10:14.647314 kubelet[2307]: E0516 16:10:14.647287 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:14.649303 containerd[1526]: time="2025-05-16T16:10:14.649267659Z" level=info msg="CreateContainer within sandbox \"ea1d52854b3185236f93c0d1ac6e85e8d64385c55cfa11969ae3433a8b302807\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 16:10:14.656113 containerd[1526]: time="2025-05-16T16:10:14.656080979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"24fbba468bbe3fed962f876d3f1b7a58df53a516ac59d237eb1d5ce01394a139\"" May 16 16:10:14.657135 kubelet[2307]: E0516 16:10:14.657111 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:14.658844 containerd[1526]: time="2025-05-16T16:10:14.658816059Z" level=info msg="CreateContainer within sandbox \"24fbba468bbe3fed962f876d3f1b7a58df53a516ac59d237eb1d5ce01394a139\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 16:10:14.661190 containerd[1526]: time="2025-05-16T16:10:14.661137699Z" level=info msg="Container ffcb93e77bae3cb15adc79e8a31c4a4474c1e67ad1e3d2a9f5ec6ffd7c682439: CDI devices from CRI Config.CDIDevices: []" May 16 16:10:14.662040 systemd[1]: Started cri-containerd-e7c06f1f95daf8ffefc8868fa3a978734f5d85f86c5064dbc8d8bd6467339f22.scope - libcontainer container e7c06f1f95daf8ffefc8868fa3a978734f5d85f86c5064dbc8d8bd6467339f22. 
May 16 16:10:14.668613 containerd[1526]: time="2025-05-16T16:10:14.668561859Z" level=info msg="CreateContainer within sandbox \"ea1d52854b3185236f93c0d1ac6e85e8d64385c55cfa11969ae3433a8b302807\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ffcb93e77bae3cb15adc79e8a31c4a4474c1e67ad1e3d2a9f5ec6ffd7c682439\"" May 16 16:10:14.670062 containerd[1526]: time="2025-05-16T16:10:14.670026219Z" level=info msg="StartContainer for \"ffcb93e77bae3cb15adc79e8a31c4a4474c1e67ad1e3d2a9f5ec6ffd7c682439\"" May 16 16:10:14.670489 containerd[1526]: time="2025-05-16T16:10:14.670443059Z" level=info msg="Container 41905f98c1e36c4be35f72b86d18d755583d389d18d465bf56913ab3474d3c8f: CDI devices from CRI Config.CDIDevices: []" May 16 16:10:14.672327 containerd[1526]: time="2025-05-16T16:10:14.672298899Z" level=info msg="connecting to shim ffcb93e77bae3cb15adc79e8a31c4a4474c1e67ad1e3d2a9f5ec6ffd7c682439" address="unix:///run/containerd/s/fe3c231606969038e2378cfbc327d52996a599b95cfbd45c0df704777de18f18" protocol=ttrpc version=3 May 16 16:10:14.681137 containerd[1526]: time="2025-05-16T16:10:14.681101659Z" level=info msg="CreateContainer within sandbox \"24fbba468bbe3fed962f876d3f1b7a58df53a516ac59d237eb1d5ce01394a139\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"41905f98c1e36c4be35f72b86d18d755583d389d18d465bf56913ab3474d3c8f\"" May 16 16:10:14.682057 kubelet[2307]: E0516 16:10:14.682024 2307 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.50:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.50:6443: connect: connection refused" interval="800ms" May 16 16:10:14.682590 containerd[1526]: time="2025-05-16T16:10:14.682552859Z" level=info msg="StartContainer for \"41905f98c1e36c4be35f72b86d18d755583d389d18d465bf56913ab3474d3c8f\"" May 16 16:10:14.685299 containerd[1526]: time="2025-05-16T16:10:14.685256579Z" level=info 
msg="connecting to shim 41905f98c1e36c4be35f72b86d18d755583d389d18d465bf56913ab3474d3c8f" address="unix:///run/containerd/s/94fbfda1c072ee807bbad9a6c9becc48e80ec15b001e0ea6239758fd107fbbe4" protocol=ttrpc version=3 May 16 16:10:14.687019 systemd[1]: Started cri-containerd-ffcb93e77bae3cb15adc79e8a31c4a4474c1e67ad1e3d2a9f5ec6ffd7c682439.scope - libcontainer container ffcb93e77bae3cb15adc79e8a31c4a4474c1e67ad1e3d2a9f5ec6ffd7c682439. May 16 16:10:14.711731 containerd[1526]: time="2025-05-16T16:10:14.711679379Z" level=info msg="StartContainer for \"e7c06f1f95daf8ffefc8868fa3a978734f5d85f86c5064dbc8d8bd6467339f22\" returns successfully" May 16 16:10:14.711858 systemd[1]: Started cri-containerd-41905f98c1e36c4be35f72b86d18d755583d389d18d465bf56913ab3474d3c8f.scope - libcontainer container 41905f98c1e36c4be35f72b86d18d755583d389d18d465bf56913ab3474d3c8f. May 16 16:10:14.755103 containerd[1526]: time="2025-05-16T16:10:14.755062419Z" level=info msg="StartContainer for \"ffcb93e77bae3cb15adc79e8a31c4a4474c1e67ad1e3d2a9f5ec6ffd7c682439\" returns successfully" May 16 16:10:14.793601 containerd[1526]: time="2025-05-16T16:10:14.792743739Z" level=info msg="StartContainer for \"41905f98c1e36c4be35f72b86d18d755583d389d18d465bf56913ab3474d3c8f\" returns successfully" May 16 16:10:14.852453 kubelet[2307]: I0516 16:10:14.852401 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 16:10:14.852814 kubelet[2307]: E0516 16:10:14.852727 2307 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.50:6443/api/v1/nodes\": dial tcp 10.0.0.50:6443: connect: connection refused" node="localhost" May 16 16:10:15.106908 kubelet[2307]: E0516 16:10:15.106864 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:10:15.107296 kubelet[2307]: E0516 16:10:15.107267 2307 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:15.107296 kubelet[2307]: E0516 16:10:15.107292 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:10:15.107430 kubelet[2307]: E0516 16:10:15.107407 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:15.107578 kubelet[2307]: E0516 16:10:15.107557 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:10:15.107670 kubelet[2307]: E0516 16:10:15.107652 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:15.654691 kubelet[2307]: I0516 16:10:15.654662 2307 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 16:10:16.110097 kubelet[2307]: E0516 16:10:16.110062 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:10:16.110221 kubelet[2307]: E0516 16:10:16.110109 2307 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 16 16:10:16.110221 kubelet[2307]: E0516 16:10:16.110185 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:16.110221 kubelet[2307]: E0516 16:10:16.110204 2307 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:16.419218 kubelet[2307]: E0516 16:10:16.418845 2307 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 16:10:16.473306 kubelet[2307]: I0516 16:10:16.473265 2307 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 16:10:16.478883 kubelet[2307]: I0516 16:10:16.478162 2307 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 16:10:16.489625 kubelet[2307]: E0516 16:10:16.489596 2307 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 16 16:10:16.489625 kubelet[2307]: I0516 16:10:16.489627 2307 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 16:10:16.491314 kubelet[2307]: E0516 16:10:16.491287 2307 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 16 16:10:16.491361 kubelet[2307]: I0516 16:10:16.491314 2307 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 16:10:16.493132 kubelet[2307]: E0516 16:10:16.493107 2307 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 16 16:10:17.068057 kubelet[2307]: I0516 16:10:17.068012 2307 apiserver.go:52] "Watching apiserver" May 16 16:10:17.077630 kubelet[2307]: I0516 16:10:17.077599 2307 desired_state_of_world_populator.go:158] "Finished populating initial desired 
state of world" May 16 16:10:18.417325 systemd[1]: Reload requested from client PID 2587 ('systemctl') (unit session-9.scope)... May 16 16:10:18.417342 systemd[1]: Reloading... May 16 16:10:18.474906 zram_generator::config[2630]: No configuration found. May 16 16:10:18.619757 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 16:10:18.714342 systemd[1]: Reloading finished in 296 ms. May 16 16:10:18.748066 kubelet[2307]: I0516 16:10:18.748027 2307 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 16:10:18.748499 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:10:18.770133 systemd[1]: kubelet.service: Deactivated successfully. May 16 16:10:18.770324 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:10:18.770375 systemd[1]: kubelet.service: Consumed 911ms CPU time, 128.2M memory peak. May 16 16:10:18.772472 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 16:10:18.911428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 16:10:18.915143 (kubelet)[2672]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 16:10:18.952595 kubelet[2672]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 16:10:18.952595 kubelet[2672]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
May 16 16:10:18.952595 kubelet[2672]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 16:10:18.952949 kubelet[2672]: I0516 16:10:18.952648 2672 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 16:10:18.960582 kubelet[2672]: I0516 16:10:18.960491 2672 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 16:10:18.960582 kubelet[2672]: I0516 16:10:18.960517 2672 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 16:10:18.960873 kubelet[2672]: I0516 16:10:18.960809 2672 server.go:954] "Client rotation is on, will bootstrap in background" May 16 16:10:18.962200 kubelet[2672]: I0516 16:10:18.962182 2672 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 16 16:10:18.964694 kubelet[2672]: I0516 16:10:18.964502 2672 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 16:10:18.970922 kubelet[2672]: I0516 16:10:18.969898 2672 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 16:10:18.972628 kubelet[2672]: I0516 16:10:18.972597 2672 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 16:10:18.972897 kubelet[2672]: I0516 16:10:18.972852 2672 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 16:10:18.973039 kubelet[2672]: I0516 16:10:18.972891 2672 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 16:10:18.973119 kubelet[2672]: I0516 16:10:18.973048 2672 topology_manager.go:138] "Creating topology manager with none policy" 
May 16 16:10:18.973119 kubelet[2672]: I0516 16:10:18.973057 2672 container_manager_linux.go:304] "Creating device plugin manager" May 16 16:10:18.973119 kubelet[2672]: I0516 16:10:18.973098 2672 state_mem.go:36] "Initialized new in-memory state store" May 16 16:10:18.973247 kubelet[2672]: I0516 16:10:18.973232 2672 kubelet.go:446] "Attempting to sync node with API server" May 16 16:10:18.973277 kubelet[2672]: I0516 16:10:18.973252 2672 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 16:10:18.973277 kubelet[2672]: I0516 16:10:18.973271 2672 kubelet.go:352] "Adding apiserver pod source" May 16 16:10:18.973582 kubelet[2672]: I0516 16:10:18.973553 2672 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 16:10:18.975104 kubelet[2672]: I0516 16:10:18.975083 2672 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 16:10:18.975600 kubelet[2672]: I0516 16:10:18.975577 2672 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 16:10:18.976042 kubelet[2672]: I0516 16:10:18.976018 2672 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 16:10:18.976098 kubelet[2672]: I0516 16:10:18.976060 2672 server.go:1287] "Started kubelet" May 16 16:10:18.976858 kubelet[2672]: I0516 16:10:18.976803 2672 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 16:10:18.980903 kubelet[2672]: I0516 16:10:18.978053 2672 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 16:10:18.983088 kubelet[2672]: I0516 16:10:18.981498 2672 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 16:10:18.983088 kubelet[2672]: I0516 16:10:18.981570 2672 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 16:10:18.983088 kubelet[2672]: I0516 16:10:18.982389 2672 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 16:10:18.985556 kubelet[2672]: I0516 16:10:18.985531 2672 server.go:479] "Adding debug handlers to kubelet server" May 16 16:10:18.986553 kubelet[2672]: I0516 16:10:18.986518 2672 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 16:10:18.986725 kubelet[2672]: E0516 16:10:18.986702 2672 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 16:10:18.989890 kubelet[2672]: I0516 16:10:18.988618 2672 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 16:10:18.989890 kubelet[2672]: I0516 16:10:18.988721 2672 reconciler.go:26] "Reconciler: start to sync state" May 16 16:10:18.992135 kubelet[2672]: I0516 16:10:18.992102 2672 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 16:10:18.993984 kubelet[2672]: I0516 16:10:18.993629 2672 factory.go:221] Registration of the systemd container factory successfully May 16 16:10:18.993984 kubelet[2672]: I0516 16:10:18.993719 2672 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 16:10:18.994200 kubelet[2672]: E0516 16:10:18.994169 2672 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 16:10:18.995028 kubelet[2672]: I0516 16:10:18.994999 2672 factory.go:221] Registration of the containerd container factory successfully May 16 16:10:18.997969 kubelet[2672]: I0516 16:10:18.997938 2672 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 16:10:18.997969 kubelet[2672]: I0516 16:10:18.997964 2672 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 16:10:18.998073 kubelet[2672]: I0516 16:10:18.997981 2672 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 16 16:10:18.998073 kubelet[2672]: I0516 16:10:18.997987 2672 kubelet.go:2382] "Starting kubelet main sync loop" May 16 16:10:18.998073 kubelet[2672]: E0516 16:10:18.998026 2672 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 16:10:19.033388 kubelet[2672]: I0516 16:10:19.033363 2672 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 16:10:19.033388 kubelet[2672]: I0516 16:10:19.033381 2672 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 16:10:19.033388 kubelet[2672]: I0516 16:10:19.033399 2672 state_mem.go:36] "Initialized new in-memory state store" May 16 16:10:19.033547 kubelet[2672]: I0516 16:10:19.033538 2672 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 16:10:19.033570 kubelet[2672]: I0516 16:10:19.033548 2672 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 16:10:19.033570 kubelet[2672]: I0516 16:10:19.033564 2672 policy_none.go:49] "None policy: Start" May 16 16:10:19.033608 kubelet[2672]: I0516 16:10:19.033571 2672 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 16:10:19.033608 kubelet[2672]: I0516 16:10:19.033580 2672 state_mem.go:35] "Initializing new in-memory state store" May 16 16:10:19.033678 kubelet[2672]: I0516 16:10:19.033666 2672 state_mem.go:75] "Updated machine memory state" May 16 16:10:19.037850 kubelet[2672]: I0516 16:10:19.037828 2672 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 16:10:19.038162 kubelet[2672]: I0516 
16:10:19.038141 2672 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 16:10:19.038251 kubelet[2672]: I0516 16:10:19.038170 2672 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 16:10:19.038820 kubelet[2672]: I0516 16:10:19.038586 2672 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 16:10:19.039742 kubelet[2672]: E0516 16:10:19.039719 2672 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 16:10:19.099234 kubelet[2672]: I0516 16:10:19.099195 2672 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 16:10:19.099234 kubelet[2672]: I0516 16:10:19.099204 2672 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 16:10:19.099500 kubelet[2672]: I0516 16:10:19.099456 2672 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 16 16:10:19.142028 kubelet[2672]: I0516 16:10:19.141997 2672 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 16 16:10:19.167469 kubelet[2672]: I0516 16:10:19.167432 2672 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 16 16:10:19.167600 kubelet[2672]: I0516 16:10:19.167537 2672 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 16 16:10:19.189929 kubelet[2672]: I0516 16:10:19.189865 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:10:19.189929 kubelet[2672]: 
I0516 16:10:19.189931 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:10:19.190040 kubelet[2672]: I0516 16:10:19.189950 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:10:19.190040 kubelet[2672]: I0516 16:10:19.189966 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b0ad5a476987a221f8c6760552a0216-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3b0ad5a476987a221f8c6760552a0216\") " pod="kube-system/kube-apiserver-localhost" May 16 16:10:19.190040 kubelet[2672]: I0516 16:10:19.190003 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 16:10:19.190040 kubelet[2672]: I0516 16:10:19.190017 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 16 
16:10:19.190040 kubelet[2672]: I0516 16:10:19.190032 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 16 16:10:19.190165 kubelet[2672]: I0516 16:10:19.190045 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b0ad5a476987a221f8c6760552a0216-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b0ad5a476987a221f8c6760552a0216\") " pod="kube-system/kube-apiserver-localhost" May 16 16:10:19.190165 kubelet[2672]: I0516 16:10:19.190074 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b0ad5a476987a221f8c6760552a0216-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3b0ad5a476987a221f8c6760552a0216\") " pod="kube-system/kube-apiserver-localhost" May 16 16:10:19.404148 kubelet[2672]: E0516 16:10:19.404121 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:19.404467 kubelet[2672]: E0516 16:10:19.404239 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:19.405771 kubelet[2672]: E0516 16:10:19.405739 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:19.974034 kubelet[2672]: I0516 16:10:19.974000 2672 apiserver.go:52] "Watching apiserver" May 16 16:10:19.989732 kubelet[2672]: I0516 
16:10:19.989700 2672 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 16:10:20.016195 kubelet[2672]: E0516 16:10:20.016102 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:20.017099 kubelet[2672]: I0516 16:10:20.017070 2672 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 16 16:10:20.017329 kubelet[2672]: I0516 16:10:20.017292 2672 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 16 16:10:20.023043 kubelet[2672]: E0516 16:10:20.022904 2672 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 16:10:20.023143 kubelet[2672]: E0516 16:10:20.023086 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:20.023904 kubelet[2672]: E0516 16:10:20.023685 2672 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 16 16:10:20.024426 kubelet[2672]: E0516 16:10:20.024368 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:20.053276 kubelet[2672]: I0516 16:10:20.053175 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.053155308 podStartE2EDuration="1.053155308s" podCreationTimestamp="2025-05-16 16:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-05-16 16:10:20.0385853 +0000 UTC m=+1.120481698" watchObservedRunningTime="2025-05-16 16:10:20.053155308 +0000 UTC m=+1.135051706" May 16 16:10:20.062716 kubelet[2672]: I0516 16:10:20.062641 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.062615965 podStartE2EDuration="1.062615965s" podCreationTimestamp="2025-05-16 16:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:10:20.062401124 +0000 UTC m=+1.144297522" watchObservedRunningTime="2025-05-16 16:10:20.062615965 +0000 UTC m=+1.144512323" May 16 16:10:20.063006 kubelet[2672]: I0516 16:10:20.062919 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.062912327 podStartE2EDuration="1.062912327s" podCreationTimestamp="2025-05-16 16:10:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:10:20.053377469 +0000 UTC m=+1.135273867" watchObservedRunningTime="2025-05-16 16:10:20.062912327 +0000 UTC m=+1.144808725" May 16 16:10:21.017605 kubelet[2672]: E0516 16:10:21.017538 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:21.018120 kubelet[2672]: E0516 16:10:21.018103 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:24.210455 kubelet[2672]: I0516 16:10:24.207522 2672 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 16:10:24.210455 kubelet[2672]: I0516 16:10:24.208241 2672 
kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 16:10:24.215185 containerd[1526]: time="2025-05-16T16:10:24.208006123Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 16:10:25.163315 systemd[1]: Created slice kubepods-besteffort-podf1df5674_7ac0_4880_acc9_33765326f193.slice - libcontainer container kubepods-besteffort-podf1df5674_7ac0_4880_acc9_33765326f193.slice. May 16 16:10:25.229064 systemd[1]: Created slice kubepods-besteffort-podf4d025dc_af2c_454b_b4b1_afb58c135f1b.slice - libcontainer container kubepods-besteffort-podf4d025dc_af2c_454b_b4b1_afb58c135f1b.slice. May 16 16:10:25.231892 kubelet[2672]: I0516 16:10:25.231776 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1df5674-7ac0-4880-acc9-33765326f193-xtables-lock\") pod \"kube-proxy-t5nz4\" (UID: \"f1df5674-7ac0-4880-acc9-33765326f193\") " pod="kube-system/kube-proxy-t5nz4" May 16 16:10:25.231892 kubelet[2672]: I0516 16:10:25.231816 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1df5674-7ac0-4880-acc9-33765326f193-kube-proxy\") pod \"kube-proxy-t5nz4\" (UID: \"f1df5674-7ac0-4880-acc9-33765326f193\") " pod="kube-system/kube-proxy-t5nz4" May 16 16:10:25.231892 kubelet[2672]: I0516 16:10:25.231838 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zsz9\" (UniqueName: \"kubernetes.io/projected/f4d025dc-af2c-454b-b4b1-afb58c135f1b-kube-api-access-4zsz9\") pod \"tigera-operator-844669ff44-vbvj8\" (UID: \"f4d025dc-af2c-454b-b4b1-afb58c135f1b\") " pod="tigera-operator/tigera-operator-844669ff44-vbvj8" May 16 16:10:25.231892 kubelet[2672]: I0516 16:10:25.231858 2672 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1df5674-7ac0-4880-acc9-33765326f193-lib-modules\") pod \"kube-proxy-t5nz4\" (UID: \"f1df5674-7ac0-4880-acc9-33765326f193\") " pod="kube-system/kube-proxy-t5nz4" May 16 16:10:25.232937 kubelet[2672]: I0516 16:10:25.232677 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rdfv\" (UniqueName: \"kubernetes.io/projected/f1df5674-7ac0-4880-acc9-33765326f193-kube-api-access-4rdfv\") pod \"kube-proxy-t5nz4\" (UID: \"f1df5674-7ac0-4880-acc9-33765326f193\") " pod="kube-system/kube-proxy-t5nz4" May 16 16:10:25.232968 kubelet[2672]: I0516 16:10:25.232943 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f4d025dc-af2c-454b-b4b1-afb58c135f1b-var-lib-calico\") pod \"tigera-operator-844669ff44-vbvj8\" (UID: \"f4d025dc-af2c-454b-b4b1-afb58c135f1b\") " pod="tigera-operator/tigera-operator-844669ff44-vbvj8" May 16 16:10:25.473071 kubelet[2672]: E0516 16:10:25.472963 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:25.473896 containerd[1526]: time="2025-05-16T16:10:25.473615984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t5nz4,Uid:f1df5674-7ac0-4880-acc9-33765326f193,Namespace:kube-system,Attempt:0,}" May 16 16:10:25.503375 containerd[1526]: time="2025-05-16T16:10:25.503326274Z" level=info msg="connecting to shim 0a44919757c0f768ca0de0e044227976f89ebe593cd56b7455e98add4ddd6c2e" address="unix:///run/containerd/s/d5ea8a2357c4e7b994b2e34503e6cef6894d9f12eee1c0f6e7583ee8f5758b82" namespace=k8s.io protocol=ttrpc version=3 May 16 16:10:25.531293 kubelet[2672]: E0516 16:10:25.531261 2672 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:25.533439 systemd[1]: Started cri-containerd-0a44919757c0f768ca0de0e044227976f89ebe593cd56b7455e98add4ddd6c2e.scope - libcontainer container 0a44919757c0f768ca0de0e044227976f89ebe593cd56b7455e98add4ddd6c2e. May 16 16:10:25.534295 containerd[1526]: time="2025-05-16T16:10:25.534228770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-vbvj8,Uid:f4d025dc-af2c-454b-b4b1-afb58c135f1b,Namespace:tigera-operator,Attempt:0,}" May 16 16:10:25.569108 containerd[1526]: time="2025-05-16T16:10:25.569066122Z" level=info msg="connecting to shim fea03f668057f0fe7827ab270a6819d646cb8619720e6db04e6738feda101564" address="unix:///run/containerd/s/95f254947081eb22d290f6746b7ae52a179c80ddb25c21dc0ad1bb6134f1f68a" namespace=k8s.io protocol=ttrpc version=3 May 16 16:10:25.581928 containerd[1526]: time="2025-05-16T16:10:25.581537857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t5nz4,Uid:f1df5674-7ac0-4880-acc9-33765326f193,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a44919757c0f768ca0de0e044227976f89ebe593cd56b7455e98add4ddd6c2e\"" May 16 16:10:25.582405 kubelet[2672]: E0516 16:10:25.582381 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:25.586139 containerd[1526]: time="2025-05-16T16:10:25.586094117Z" level=info msg="CreateContainer within sandbox \"0a44919757c0f768ca0de0e044227976f89ebe593cd56b7455e98add4ddd6c2e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 16:10:25.603340 containerd[1526]: time="2025-05-16T16:10:25.600900262Z" level=info msg="Container 79d09f097a141f7809ca9cce02465f669202bce95d83455a23bf0f5a99c56599: CDI devices from CRI Config.CDIDevices: []" May 16 16:10:25.606043 systemd[1]: Started 
cri-containerd-fea03f668057f0fe7827ab270a6819d646cb8619720e6db04e6738feda101564.scope - libcontainer container fea03f668057f0fe7827ab270a6819d646cb8619720e6db04e6738feda101564. May 16 16:10:25.609144 containerd[1526]: time="2025-05-16T16:10:25.609092698Z" level=info msg="CreateContainer within sandbox \"0a44919757c0f768ca0de0e044227976f89ebe593cd56b7455e98add4ddd6c2e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"79d09f097a141f7809ca9cce02465f669202bce95d83455a23bf0f5a99c56599\"" May 16 16:10:25.609900 containerd[1526]: time="2025-05-16T16:10:25.609758421Z" level=info msg="StartContainer for \"79d09f097a141f7809ca9cce02465f669202bce95d83455a23bf0f5a99c56599\"" May 16 16:10:25.611766 containerd[1526]: time="2025-05-16T16:10:25.611714029Z" level=info msg="connecting to shim 79d09f097a141f7809ca9cce02465f669202bce95d83455a23bf0f5a99c56599" address="unix:///run/containerd/s/d5ea8a2357c4e7b994b2e34503e6cef6894d9f12eee1c0f6e7583ee8f5758b82" protocol=ttrpc version=3 May 16 16:10:25.631047 systemd[1]: Started cri-containerd-79d09f097a141f7809ca9cce02465f669202bce95d83455a23bf0f5a99c56599.scope - libcontainer container 79d09f097a141f7809ca9cce02465f669202bce95d83455a23bf0f5a99c56599. 
May 16 16:10:25.643110 containerd[1526]: time="2025-05-16T16:10:25.643068367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-vbvj8,Uid:f4d025dc-af2c-454b-b4b1-afb58c135f1b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"fea03f668057f0fe7827ab270a6819d646cb8619720e6db04e6738feda101564\"" May 16 16:10:25.645475 containerd[1526]: time="2025-05-16T16:10:25.645351977Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 16 16:10:25.672392 containerd[1526]: time="2025-05-16T16:10:25.672348175Z" level=info msg="StartContainer for \"79d09f097a141f7809ca9cce02465f669202bce95d83455a23bf0f5a99c56599\" returns successfully" May 16 16:10:25.916901 kubelet[2672]: E0516 16:10:25.914378 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:26.001244 kubelet[2672]: E0516 16:10:26.001206 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:26.027369 kubelet[2672]: E0516 16:10:26.027094 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:26.027369 kubelet[2672]: E0516 16:10:26.027099 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:26.027369 kubelet[2672]: E0516 16:10:26.027314 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:26.028142 kubelet[2672]: E0516 16:10:26.027683 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:26.047433 kubelet[2672]: I0516 16:10:26.047378 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t5nz4" podStartSLOduration=1.047354207 podStartE2EDuration="1.047354207s" podCreationTimestamp="2025-05-16 16:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:10:26.037534926 +0000 UTC m=+7.119431364" watchObservedRunningTime="2025-05-16 16:10:26.047354207 +0000 UTC m=+7.129250565" May 16 16:10:27.470938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount341360186.mount: Deactivated successfully. May 16 16:10:28.526953 containerd[1526]: time="2025-05-16T16:10:28.526901641Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:28.528170 containerd[1526]: time="2025-05-16T16:10:28.528136685Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=22143480" May 16 16:10:28.528974 containerd[1526]: time="2025-05-16T16:10:28.528939928Z" level=info msg="ImageCreate event name:\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:28.531322 containerd[1526]: time="2025-05-16T16:10:28.531289017Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:28.532596 containerd[1526]: time="2025-05-16T16:10:28.532564421Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\", repo tag \"quay.io/tigera/operator:v1.38.0\", 
repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"22139475\" in 2.887082764s" May 16 16:10:28.532638 containerd[1526]: time="2025-05-16T16:10:28.532597661Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\"" May 16 16:10:28.535250 containerd[1526]: time="2025-05-16T16:10:28.535219471Z" level=info msg="CreateContainer within sandbox \"fea03f668057f0fe7827ab270a6819d646cb8619720e6db04e6738feda101564\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 16 16:10:28.540909 containerd[1526]: time="2025-05-16T16:10:28.540327729Z" level=info msg="Container d6c5540042f66c4c90d99886bf11e9d53661480a487e93b821a584b905ff1a33: CDI devices from CRI Config.CDIDevices: []" May 16 16:10:28.547218 containerd[1526]: time="2025-05-16T16:10:28.547183194Z" level=info msg="CreateContainer within sandbox \"fea03f668057f0fe7827ab270a6819d646cb8619720e6db04e6738feda101564\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d6c5540042f66c4c90d99886bf11e9d53661480a487e93b821a584b905ff1a33\"" May 16 16:10:28.547607 containerd[1526]: time="2025-05-16T16:10:28.547562195Z" level=info msg="StartContainer for \"d6c5540042f66c4c90d99886bf11e9d53661480a487e93b821a584b905ff1a33\"" May 16 16:10:28.548465 containerd[1526]: time="2025-05-16T16:10:28.548431079Z" level=info msg="connecting to shim d6c5540042f66c4c90d99886bf11e9d53661480a487e93b821a584b905ff1a33" address="unix:///run/containerd/s/95f254947081eb22d290f6746b7ae52a179c80ddb25c21dc0ad1bb6134f1f68a" protocol=ttrpc version=3 May 16 16:10:28.578160 systemd[1]: Started cri-containerd-d6c5540042f66c4c90d99886bf11e9d53661480a487e93b821a584b905ff1a33.scope - libcontainer container d6c5540042f66c4c90d99886bf11e9d53661480a487e93b821a584b905ff1a33. 
May 16 16:10:28.607366 containerd[1526]: time="2025-05-16T16:10:28.607329971Z" level=info msg="StartContainer for \"d6c5540042f66c4c90d99886bf11e9d53661480a487e93b821a584b905ff1a33\" returns successfully" May 16 16:10:28.924944 update_engine[1510]: I20250516 16:10:28.924203 1510 update_attempter.cc:509] Updating boot flags... May 16 16:10:29.058851 kubelet[2672]: I0516 16:10:29.058772 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-vbvj8" podStartSLOduration=1.169726098 podStartE2EDuration="4.058755109s" podCreationTimestamp="2025-05-16 16:10:25 +0000 UTC" firstStartedPulling="2025-05-16 16:10:25.644142772 +0000 UTC m=+6.726039170" lastFinishedPulling="2025-05-16 16:10:28.533171783 +0000 UTC m=+9.615068181" observedRunningTime="2025-05-16 16:10:29.057548185 +0000 UTC m=+10.139444583" watchObservedRunningTime="2025-05-16 16:10:29.058755109 +0000 UTC m=+10.140651507" May 16 16:10:33.874727 sudo[1762]: pam_unix(sudo:session): session closed for user root May 16 16:10:33.880511 sshd[1761]: Connection closed by 10.0.0.1 port 59586 May 16 16:10:33.881106 sshd-session[1759]: pam_unix(sshd:session): session closed for user core May 16 16:10:33.885122 systemd[1]: sshd@8-10.0.0.50:22-10.0.0.1:59586.service: Deactivated successfully. May 16 16:10:33.887483 systemd[1]: session-9.scope: Deactivated successfully. May 16 16:10:33.888966 systemd[1]: session-9.scope: Consumed 7.356s CPU time, 229.6M memory peak. May 16 16:10:33.890131 systemd-logind[1508]: Session 9 logged out. Waiting for processes to exit. May 16 16:10:33.891763 systemd-logind[1508]: Removed session 9. May 16 16:10:38.179010 systemd[1]: Created slice kubepods-besteffort-pod9c688ea5_be65_4b46_965e_ef806168f658.slice - libcontainer container kubepods-besteffort-pod9c688ea5_be65_4b46_965e_ef806168f658.slice. 
May 16 16:10:38.234018 kubelet[2672]: I0516 16:10:38.233979 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9c688ea5-be65-4b46-965e-ef806168f658-typha-certs\") pod \"calico-typha-b8fb88cd-2kszj\" (UID: \"9c688ea5-be65-4b46-965e-ef806168f658\") " pod="calico-system/calico-typha-b8fb88cd-2kszj" May 16 16:10:38.234018 kubelet[2672]: I0516 16:10:38.234023 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c688ea5-be65-4b46-965e-ef806168f658-tigera-ca-bundle\") pod \"calico-typha-b8fb88cd-2kszj\" (UID: \"9c688ea5-be65-4b46-965e-ef806168f658\") " pod="calico-system/calico-typha-b8fb88cd-2kszj" May 16 16:10:38.234443 kubelet[2672]: I0516 16:10:38.234043 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94lll\" (UniqueName: \"kubernetes.io/projected/9c688ea5-be65-4b46-965e-ef806168f658-kube-api-access-94lll\") pod \"calico-typha-b8fb88cd-2kszj\" (UID: \"9c688ea5-be65-4b46-965e-ef806168f658\") " pod="calico-system/calico-typha-b8fb88cd-2kszj" May 16 16:10:38.394656 systemd[1]: Created slice kubepods-besteffort-poda89b585c_5f2d_4026_988b_dd41b1ea5522.slice - libcontainer container kubepods-besteffort-poda89b585c_5f2d_4026_988b_dd41b1ea5522.slice. 
May 16 16:10:38.436108 kubelet[2672]: I0516 16:10:38.435998 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a89b585c-5f2d-4026-988b-dd41b1ea5522-xtables-lock\") pod \"calico-node-rvsn9\" (UID: \"a89b585c-5f2d-4026-988b-dd41b1ea5522\") " pod="calico-system/calico-node-rvsn9" May 16 16:10:38.436108 kubelet[2672]: I0516 16:10:38.436040 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbq46\" (UniqueName: \"kubernetes.io/projected/a89b585c-5f2d-4026-988b-dd41b1ea5522-kube-api-access-mbq46\") pod \"calico-node-rvsn9\" (UID: \"a89b585c-5f2d-4026-988b-dd41b1ea5522\") " pod="calico-system/calico-node-rvsn9" May 16 16:10:38.436108 kubelet[2672]: I0516 16:10:38.436058 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a89b585c-5f2d-4026-988b-dd41b1ea5522-lib-modules\") pod \"calico-node-rvsn9\" (UID: \"a89b585c-5f2d-4026-988b-dd41b1ea5522\") " pod="calico-system/calico-node-rvsn9" May 16 16:10:38.436108 kubelet[2672]: I0516 16:10:38.436075 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a89b585c-5f2d-4026-988b-dd41b1ea5522-cni-bin-dir\") pod \"calico-node-rvsn9\" (UID: \"a89b585c-5f2d-4026-988b-dd41b1ea5522\") " pod="calico-system/calico-node-rvsn9" May 16 16:10:38.442616 kubelet[2672]: I0516 16:10:38.436098 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a89b585c-5f2d-4026-988b-dd41b1ea5522-var-run-calico\") pod \"calico-node-rvsn9\" (UID: \"a89b585c-5f2d-4026-988b-dd41b1ea5522\") " pod="calico-system/calico-node-rvsn9" May 16 16:10:38.442676 kubelet[2672]: I0516 16:10:38.442650 2672 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a89b585c-5f2d-4026-988b-dd41b1ea5522-node-certs\") pod \"calico-node-rvsn9\" (UID: \"a89b585c-5f2d-4026-988b-dd41b1ea5522\") " pod="calico-system/calico-node-rvsn9" May 16 16:10:38.442676 kubelet[2672]: I0516 16:10:38.442673 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a89b585c-5f2d-4026-988b-dd41b1ea5522-cni-net-dir\") pod \"calico-node-rvsn9\" (UID: \"a89b585c-5f2d-4026-988b-dd41b1ea5522\") " pod="calico-system/calico-node-rvsn9" May 16 16:10:38.442730 kubelet[2672]: I0516 16:10:38.442691 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a89b585c-5f2d-4026-988b-dd41b1ea5522-cni-log-dir\") pod \"calico-node-rvsn9\" (UID: \"a89b585c-5f2d-4026-988b-dd41b1ea5522\") " pod="calico-system/calico-node-rvsn9" May 16 16:10:38.442730 kubelet[2672]: I0516 16:10:38.442709 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a89b585c-5f2d-4026-988b-dd41b1ea5522-policysync\") pod \"calico-node-rvsn9\" (UID: \"a89b585c-5f2d-4026-988b-dd41b1ea5522\") " pod="calico-system/calico-node-rvsn9" May 16 16:10:38.442730 kubelet[2672]: I0516 16:10:38.442724 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a89b585c-5f2d-4026-988b-dd41b1ea5522-tigera-ca-bundle\") pod \"calico-node-rvsn9\" (UID: \"a89b585c-5f2d-4026-988b-dd41b1ea5522\") " pod="calico-system/calico-node-rvsn9" May 16 16:10:38.442793 kubelet[2672]: I0516 16:10:38.442744 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a89b585c-5f2d-4026-988b-dd41b1ea5522-flexvol-driver-host\") pod \"calico-node-rvsn9\" (UID: \"a89b585c-5f2d-4026-988b-dd41b1ea5522\") " pod="calico-system/calico-node-rvsn9" May 16 16:10:38.442793 kubelet[2672]: I0516 16:10:38.442761 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a89b585c-5f2d-4026-988b-dd41b1ea5522-var-lib-calico\") pod \"calico-node-rvsn9\" (UID: \"a89b585c-5f2d-4026-988b-dd41b1ea5522\") " pod="calico-system/calico-node-rvsn9" May 16 16:10:38.484432 kubelet[2672]: E0516 16:10:38.484314 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:38.484955 containerd[1526]: time="2025-05-16T16:10:38.484924147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b8fb88cd-2kszj,Uid:9c688ea5-be65-4b46-965e-ef806168f658,Namespace:calico-system,Attempt:0,}" May 16 16:10:38.556335 kubelet[2672]: E0516 16:10:38.555691 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.556335 kubelet[2672]: W0516 16:10:38.555715 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.561438 kubelet[2672]: E0516 16:10:38.561375 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.561944 kubelet[2672]: W0516 16:10:38.561656 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, 
output: "" May 16 16:10:38.565375 containerd[1526]: time="2025-05-16T16:10:38.558800727Z" level=info msg="connecting to shim 60047725c6fa0a65d7f0a3d249612fad5077c9f6770b4160050f8e608944715d" address="unix:///run/containerd/s/2ee6be20c0f58ff47384e9a2db3b55ffcdfc5a79c7aef8790b3cc40013599062" namespace=k8s.io protocol=ttrpc version=3 May 16 16:10:38.568628 kubelet[2672]: E0516 16:10:38.568597 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.568950 kubelet[2672]: E0516 16:10:38.568918 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.586781 kubelet[2672]: E0516 16:10:38.586727 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlzmx" podUID="d9789950-a309-4163-96c5-d67e446c252b" May 16 16:10:38.622926 kubelet[2672]: E0516 16:10:38.622897 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.622926 kubelet[2672]: W0516 16:10:38.622918 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.623083 kubelet[2672]: E0516 16:10:38.622939 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.623108 kubelet[2672]: E0516 16:10:38.623084 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.627167 systemd[1]: Started cri-containerd-60047725c6fa0a65d7f0a3d249612fad5077c9f6770b4160050f8e608944715d.scope - libcontainer container 60047725c6fa0a65d7f0a3d249612fad5077c9f6770b4160050f8e608944715d. May 16 16:10:38.644779 kubelet[2672]: W0516 16:10:38.623092 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.644966 kubelet[2672]: E0516 16:10:38.644785 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.645053 kubelet[2672]: E0516 16:10:38.645039 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.645084 kubelet[2672]: W0516 16:10:38.645053 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.645084 kubelet[2672]: E0516 16:10:38.645064 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.645211 kubelet[2672]: E0516 16:10:38.645200 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.645211 kubelet[2672]: W0516 16:10:38.645211 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.645272 kubelet[2672]: E0516 16:10:38.645218 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.645402 kubelet[2672]: E0516 16:10:38.645387 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.645402 kubelet[2672]: W0516 16:10:38.645398 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.645462 kubelet[2672]: E0516 16:10:38.645406 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.645538 kubelet[2672]: E0516 16:10:38.645528 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.645538 kubelet[2672]: W0516 16:10:38.645537 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.645596 kubelet[2672]: E0516 16:10:38.645546 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.645667 kubelet[2672]: E0516 16:10:38.645657 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.645667 kubelet[2672]: W0516 16:10:38.645667 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.645721 kubelet[2672]: E0516 16:10:38.645674 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.645798 kubelet[2672]: E0516 16:10:38.645789 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.645798 kubelet[2672]: W0516 16:10:38.645798 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.645874 kubelet[2672]: E0516 16:10:38.645805 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.645963 kubelet[2672]: E0516 16:10:38.645952 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.645963 kubelet[2672]: W0516 16:10:38.645962 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.646018 kubelet[2672]: E0516 16:10:38.645970 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.646093 kubelet[2672]: E0516 16:10:38.646084 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.646093 kubelet[2672]: W0516 16:10:38.646093 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.646147 kubelet[2672]: E0516 16:10:38.646100 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.646216 kubelet[2672]: E0516 16:10:38.646208 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.646216 kubelet[2672]: W0516 16:10:38.646216 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.646267 kubelet[2672]: E0516 16:10:38.646223 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.646354 kubelet[2672]: E0516 16:10:38.646334 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.646354 kubelet[2672]: W0516 16:10:38.646351 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.646429 kubelet[2672]: E0516 16:10:38.646362 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.646512 kubelet[2672]: E0516 16:10:38.646501 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.646512 kubelet[2672]: W0516 16:10:38.646511 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.646562 kubelet[2672]: E0516 16:10:38.646518 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.646639 kubelet[2672]: E0516 16:10:38.646630 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.646687 kubelet[2672]: W0516 16:10:38.646672 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.646719 kubelet[2672]: E0516 16:10:38.646688 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.646869 kubelet[2672]: E0516 16:10:38.646858 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.646869 kubelet[2672]: W0516 16:10:38.646869 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.646944 kubelet[2672]: E0516 16:10:38.646905 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.647078 kubelet[2672]: E0516 16:10:38.647065 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.647078 kubelet[2672]: W0516 16:10:38.647077 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.647140 kubelet[2672]: E0516 16:10:38.647087 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.647259 kubelet[2672]: E0516 16:10:38.647247 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.647259 kubelet[2672]: W0516 16:10:38.647258 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.647311 kubelet[2672]: E0516 16:10:38.647265 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.647424 kubelet[2672]: E0516 16:10:38.647414 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.647451 kubelet[2672]: W0516 16:10:38.647425 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.647451 kubelet[2672]: E0516 16:10:38.647434 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.647578 kubelet[2672]: E0516 16:10:38.647567 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.647578 kubelet[2672]: W0516 16:10:38.647578 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.647633 kubelet[2672]: E0516 16:10:38.647586 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.647732 kubelet[2672]: E0516 16:10:38.647722 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.647766 kubelet[2672]: W0516 16:10:38.647732 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.647766 kubelet[2672]: E0516 16:10:38.647741 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.648076 kubelet[2672]: E0516 16:10:38.648063 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.648123 kubelet[2672]: W0516 16:10:38.648077 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.648123 kubelet[2672]: E0516 16:10:38.648087 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.648123 kubelet[2672]: I0516 16:10:38.648110 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d9789950-a309-4163-96c5-d67e446c252b-varrun\") pod \"csi-node-driver-vlzmx\" (UID: \"d9789950-a309-4163-96c5-d67e446c252b\") " pod="calico-system/csi-node-driver-vlzmx" May 16 16:10:38.648294 kubelet[2672]: E0516 16:10:38.648279 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.648334 kubelet[2672]: W0516 16:10:38.648304 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.648334 kubelet[2672]: E0516 16:10:38.648320 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.648389 kubelet[2672]: I0516 16:10:38.648338 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22nnw\" (UniqueName: \"kubernetes.io/projected/d9789950-a309-4163-96c5-d67e446c252b-kube-api-access-22nnw\") pod \"csi-node-driver-vlzmx\" (UID: \"d9789950-a309-4163-96c5-d67e446c252b\") " pod="calico-system/csi-node-driver-vlzmx" May 16 16:10:38.648576 kubelet[2672]: E0516 16:10:38.648535 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.648617 kubelet[2672]: W0516 16:10:38.648576 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.648617 kubelet[2672]: E0516 16:10:38.648592 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.648617 kubelet[2672]: I0516 16:10:38.648607 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d9789950-a309-4163-96c5-d67e446c252b-registration-dir\") pod \"csi-node-driver-vlzmx\" (UID: \"d9789950-a309-4163-96c5-d67e446c252b\") " pod="calico-system/csi-node-driver-vlzmx" May 16 16:10:38.648820 kubelet[2672]: E0516 16:10:38.648806 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.648820 kubelet[2672]: W0516 16:10:38.648819 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.649079 kubelet[2672]: E0516 16:10:38.648833 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.649079 kubelet[2672]: I0516 16:10:38.648848 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d9789950-a309-4163-96c5-d67e446c252b-socket-dir\") pod \"csi-node-driver-vlzmx\" (UID: \"d9789950-a309-4163-96c5-d67e446c252b\") " pod="calico-system/csi-node-driver-vlzmx" May 16 16:10:38.649168 kubelet[2672]: E0516 16:10:38.649146 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.649168 kubelet[2672]: W0516 16:10:38.649160 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.649223 kubelet[2672]: E0516 16:10:38.649175 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.649223 kubelet[2672]: I0516 16:10:38.649194 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9789950-a309-4163-96c5-d67e446c252b-kubelet-dir\") pod \"csi-node-driver-vlzmx\" (UID: \"d9789950-a309-4163-96c5-d67e446c252b\") " pod="calico-system/csi-node-driver-vlzmx" May 16 16:10:38.649359 kubelet[2672]: E0516 16:10:38.649331 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.649359 kubelet[2672]: W0516 16:10:38.649349 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.649359 kubelet[2672]: E0516 16:10:38.649363 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.649965 kubelet[2672]: E0516 16:10:38.649477 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.649965 kubelet[2672]: W0516 16:10:38.649483 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.649965 kubelet[2672]: E0516 16:10:38.649525 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.649965 kubelet[2672]: E0516 16:10:38.649961 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.649965 kubelet[2672]: W0516 16:10:38.649969 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.650073 kubelet[2672]: E0516 16:10:38.650007 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.650196 kubelet[2672]: E0516 16:10:38.650181 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.650196 kubelet[2672]: W0516 16:10:38.650191 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.650196 kubelet[2672]: E0516 16:10:38.650224 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.650325 kubelet[2672]: E0516 16:10:38.650312 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.650325 kubelet[2672]: W0516 16:10:38.650320 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.650325 kubelet[2672]: E0516 16:10:38.650353 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.650441 kubelet[2672]: E0516 16:10:38.650434 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.650441 kubelet[2672]: W0516 16:10:38.650440 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.650530 kubelet[2672]: E0516 16:10:38.650465 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.650553 kubelet[2672]: E0516 16:10:38.650545 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.650553 kubelet[2672]: W0516 16:10:38.650551 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.650618 kubelet[2672]: E0516 16:10:38.650560 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.650699 kubelet[2672]: E0516 16:10:38.650686 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.650963 kubelet[2672]: W0516 16:10:38.650941 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.650963 kubelet[2672]: E0516 16:10:38.650964 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.651206 kubelet[2672]: E0516 16:10:38.651192 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.651206 kubelet[2672]: W0516 16:10:38.651205 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.651372 kubelet[2672]: E0516 16:10:38.651354 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.651992 kubelet[2672]: E0516 16:10:38.651973 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.651992 kubelet[2672]: W0516 16:10:38.651990 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.651992 kubelet[2672]: E0516 16:10:38.652002 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.674265 containerd[1526]: time="2025-05-16T16:10:38.674143305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b8fb88cd-2kszj,Uid:9c688ea5-be65-4b46-965e-ef806168f658,Namespace:calico-system,Attempt:0,} returns sandbox id \"60047725c6fa0a65d7f0a3d249612fad5077c9f6770b4160050f8e608944715d\"" May 16 16:10:38.678010 kubelet[2672]: E0516 16:10:38.677985 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:38.693480 containerd[1526]: time="2025-05-16T16:10:38.693359222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 16 16:10:38.700170 containerd[1526]: time="2025-05-16T16:10:38.700043315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rvsn9,Uid:a89b585c-5f2d-4026-988b-dd41b1ea5522,Namespace:calico-system,Attempt:0,}" May 16 16:10:38.745909 containerd[1526]: time="2025-05-16T16:10:38.745831001Z" level=info msg="connecting to shim f5a84c175604c0793723ed77e672664cc56fb4e8f60e199a878c17abf3252fa5" address="unix:///run/containerd/s/4a51d3eb4d550b775ebe6ab21fcf471ed641fdb60e7880503c5482b0c94681bc" namespace=k8s.io protocol=ttrpc version=3 May 16 16:10:38.751555 kubelet[2672]: E0516 16:10:38.750137 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.751555 kubelet[2672]: W0516 16:10:38.750157 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.751555 kubelet[2672]: E0516 16:10:38.750175 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.751555 kubelet[2672]: E0516 16:10:38.750345 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.751555 kubelet[2672]: W0516 16:10:38.750354 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.751555 kubelet[2672]: E0516 16:10:38.750367 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.751555 kubelet[2672]: E0516 16:10:38.750509 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.751555 kubelet[2672]: W0516 16:10:38.750516 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.751555 kubelet[2672]: E0516 16:10:38.750533 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.751555 kubelet[2672]: E0516 16:10:38.750783 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.751827 kubelet[2672]: W0516 16:10:38.750792 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.751827 kubelet[2672]: E0516 16:10:38.750806 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.751827 kubelet[2672]: E0516 16:10:38.750993 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.751827 kubelet[2672]: W0516 16:10:38.751000 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.751827 kubelet[2672]: E0516 16:10:38.751015 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.751827 kubelet[2672]: E0516 16:10:38.751225 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.751827 kubelet[2672]: W0516 16:10:38.751240 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.751827 kubelet[2672]: E0516 16:10:38.751261 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.751827 kubelet[2672]: E0516 16:10:38.751418 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.751827 kubelet[2672]: W0516 16:10:38.751426 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.752032 kubelet[2672]: E0516 16:10:38.751450 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.752032 kubelet[2672]: E0516 16:10:38.751581 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.752032 kubelet[2672]: W0516 16:10:38.751588 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.752032 kubelet[2672]: E0516 16:10:38.751633 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.752032 kubelet[2672]: E0516 16:10:38.751716 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.752032 kubelet[2672]: W0516 16:10:38.751724 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.752032 kubelet[2672]: E0516 16:10:38.751749 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.752032 kubelet[2672]: E0516 16:10:38.751866 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.752032 kubelet[2672]: W0516 16:10:38.751874 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.752032 kubelet[2672]: E0516 16:10:38.751907 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.752992 kubelet[2672]: E0516 16:10:38.752963 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.752992 kubelet[2672]: W0516 16:10:38.752982 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.753085 kubelet[2672]: E0516 16:10:38.753012 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.753200 kubelet[2672]: E0516 16:10:38.753184 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.753200 kubelet[2672]: W0516 16:10:38.753198 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.753251 kubelet[2672]: E0516 16:10:38.753228 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.753576 kubelet[2672]: E0516 16:10:38.753555 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.753576 kubelet[2672]: W0516 16:10:38.753573 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.753737 kubelet[2672]: E0516 16:10:38.753601 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.753925 kubelet[2672]: E0516 16:10:38.753778 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.753925 kubelet[2672]: W0516 16:10:38.753789 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.753925 kubelet[2672]: E0516 16:10:38.753860 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.754421 kubelet[2672]: E0516 16:10:38.754395 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.754421 kubelet[2672]: W0516 16:10:38.754411 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.754497 kubelet[2672]: E0516 16:10:38.754476 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.754602 kubelet[2672]: E0516 16:10:38.754586 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.754602 kubelet[2672]: W0516 16:10:38.754597 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.754651 kubelet[2672]: E0516 16:10:38.754632 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.754764 kubelet[2672]: E0516 16:10:38.754746 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.754764 kubelet[2672]: W0516 16:10:38.754756 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.754827 kubelet[2672]: E0516 16:10:38.754787 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.755368 kubelet[2672]: E0516 16:10:38.754897 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.755368 kubelet[2672]: W0516 16:10:38.754908 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.755368 kubelet[2672]: E0516 16:10:38.754920 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.755368 kubelet[2672]: E0516 16:10:38.755203 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.755368 kubelet[2672]: W0516 16:10:38.755215 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.755512 kubelet[2672]: E0516 16:10:38.755237 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.756356 kubelet[2672]: E0516 16:10:38.756320 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.756413 kubelet[2672]: W0516 16:10:38.756383 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.756413 kubelet[2672]: E0516 16:10:38.756403 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.756674 kubelet[2672]: E0516 16:10:38.756624 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.756674 kubelet[2672]: W0516 16:10:38.756637 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.756674 kubelet[2672]: E0516 16:10:38.756653 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.757032 kubelet[2672]: E0516 16:10:38.757009 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.757032 kubelet[2672]: W0516 16:10:38.757024 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.757152 kubelet[2672]: E0516 16:10:38.757135 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.757324 kubelet[2672]: E0516 16:10:38.757303 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.757324 kubelet[2672]: W0516 16:10:38.757316 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.757418 kubelet[2672]: E0516 16:10:38.757332 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.757875 kubelet[2672]: E0516 16:10:38.757853 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.757875 kubelet[2672]: W0516 16:10:38.757869 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.757949 kubelet[2672]: E0516 16:10:38.757925 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.758226 kubelet[2672]: E0516 16:10:38.758209 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.758226 kubelet[2672]: W0516 16:10:38.758223 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.758292 kubelet[2672]: E0516 16:10:38.758233 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:38.767983 kubelet[2672]: E0516 16:10:38.767956 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:38.767983 kubelet[2672]: W0516 16:10:38.767973 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:38.767983 kubelet[2672]: E0516 16:10:38.767987 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:38.782043 systemd[1]: Started cri-containerd-f5a84c175604c0793723ed77e672664cc56fb4e8f60e199a878c17abf3252fa5.scope - libcontainer container f5a84c175604c0793723ed77e672664cc56fb4e8f60e199a878c17abf3252fa5. May 16 16:10:38.827598 containerd[1526]: time="2025-05-16T16:10:38.827552876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-rvsn9,Uid:a89b585c-5f2d-4026-988b-dd41b1ea5522,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5a84c175604c0793723ed77e672664cc56fb4e8f60e199a878c17abf3252fa5\"" May 16 16:10:39.998410 kubelet[2672]: E0516 16:10:39.998359 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlzmx" podUID="d9789950-a309-4163-96c5-d67e446c252b" May 16 16:10:40.679077 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3375126192.mount: Deactivated successfully. 
May 16 16:10:41.093832 containerd[1526]: time="2025-05-16T16:10:41.093791391Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:41.094344 containerd[1526]: time="2025-05-16T16:10:41.094307792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33020269" May 16 16:10:41.095013 containerd[1526]: time="2025-05-16T16:10:41.094988473Z" level=info msg="ImageCreate event name:\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:41.096782 containerd[1526]: time="2025-05-16T16:10:41.096757756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:41.097484 containerd[1526]: time="2025-05-16T16:10:41.097368517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"33020123\" in 2.403971735s" May 16 16:10:41.097484 containerd[1526]: time="2025-05-16T16:10:41.097401637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\"" May 16 16:10:41.100564 containerd[1526]: time="2025-05-16T16:10:41.100534882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 16 16:10:41.112655 containerd[1526]: time="2025-05-16T16:10:41.112615861Z" level=info msg="CreateContainer within sandbox \"60047725c6fa0a65d7f0a3d249612fad5077c9f6770b4160050f8e608944715d\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 16 16:10:41.120657 containerd[1526]: time="2025-05-16T16:10:41.120612673Z" level=info msg="Container 11bb31a2a85b40542f19ba3fd9b2bed3844fe05539e2874bb5e5e1c8f3f84303: CDI devices from CRI Config.CDIDevices: []" May 16 16:10:41.126715 containerd[1526]: time="2025-05-16T16:10:41.126661643Z" level=info msg="CreateContainer within sandbox \"60047725c6fa0a65d7f0a3d249612fad5077c9f6770b4160050f8e608944715d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"11bb31a2a85b40542f19ba3fd9b2bed3844fe05539e2874bb5e5e1c8f3f84303\"" May 16 16:10:41.131240 containerd[1526]: time="2025-05-16T16:10:41.131211490Z" level=info msg="StartContainer for \"11bb31a2a85b40542f19ba3fd9b2bed3844fe05539e2874bb5e5e1c8f3f84303\"" May 16 16:10:41.132611 containerd[1526]: time="2025-05-16T16:10:41.132572772Z" level=info msg="connecting to shim 11bb31a2a85b40542f19ba3fd9b2bed3844fe05539e2874bb5e5e1c8f3f84303" address="unix:///run/containerd/s/2ee6be20c0f58ff47384e9a2db3b55ffcdfc5a79c7aef8790b3cc40013599062" protocol=ttrpc version=3 May 16 16:10:41.152039 systemd[1]: Started cri-containerd-11bb31a2a85b40542f19ba3fd9b2bed3844fe05539e2874bb5e5e1c8f3f84303.scope - libcontainer container 11bb31a2a85b40542f19ba3fd9b2bed3844fe05539e2874bb5e5e1c8f3f84303. 
May 16 16:10:41.195543 containerd[1526]: time="2025-05-16T16:10:41.194743749Z" level=info msg="StartContainer for \"11bb31a2a85b40542f19ba3fd9b2bed3844fe05539e2874bb5e5e1c8f3f84303\" returns successfully" May 16 16:10:41.999232 kubelet[2672]: E0516 16:10:41.999185 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlzmx" podUID="d9789950-a309-4163-96c5-d67e446c252b" May 16 16:10:42.069016 kubelet[2672]: E0516 16:10:42.068990 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:42.070551 kubelet[2672]: E0516 16:10:42.070524 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.070713 kubelet[2672]: W0516 16:10:42.070688 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.075035 kubelet[2672]: E0516 16:10:42.074998 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.075571 kubelet[2672]: E0516 16:10:42.075244 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.075571 kubelet[2672]: W0516 16:10:42.075259 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.075571 kubelet[2672]: E0516 16:10:42.075305 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.075571 kubelet[2672]: E0516 16:10:42.075468 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.075571 kubelet[2672]: W0516 16:10:42.075479 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.075571 kubelet[2672]: E0516 16:10:42.075487 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.075934 kubelet[2672]: E0516 16:10:42.075905 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.075934 kubelet[2672]: W0516 16:10:42.075922 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.075934 kubelet[2672]: E0516 16:10:42.075934 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.076110 kubelet[2672]: E0516 16:10:42.076095 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.076110 kubelet[2672]: W0516 16:10:42.076105 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.076166 kubelet[2672]: E0516 16:10:42.076113 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.076442 kubelet[2672]: E0516 16:10:42.076275 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.076442 kubelet[2672]: W0516 16:10:42.076315 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.076442 kubelet[2672]: E0516 16:10:42.076328 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.076717 kubelet[2672]: E0516 16:10:42.076680 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.076717 kubelet[2672]: W0516 16:10:42.076695 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.076717 kubelet[2672]: E0516 16:10:42.076706 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.076869 kubelet[2672]: E0516 16:10:42.076853 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.076869 kubelet[2672]: W0516 16:10:42.076864 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.076939 kubelet[2672]: E0516 16:10:42.076872 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.077903 kubelet[2672]: E0516 16:10:42.077036 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.077903 kubelet[2672]: W0516 16:10:42.077047 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.077903 kubelet[2672]: E0516 16:10:42.077056 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.077903 kubelet[2672]: E0516 16:10:42.077164 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.077903 kubelet[2672]: W0516 16:10:42.077169 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.077903 kubelet[2672]: E0516 16:10:42.077176 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.077903 kubelet[2672]: E0516 16:10:42.077281 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.077903 kubelet[2672]: W0516 16:10:42.077287 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.077903 kubelet[2672]: E0516 16:10:42.077293 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.077903 kubelet[2672]: E0516 16:10:42.077410 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.078148 kubelet[2672]: W0516 16:10:42.077417 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.078148 kubelet[2672]: E0516 16:10:42.077424 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.078148 kubelet[2672]: E0516 16:10:42.077548 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.078148 kubelet[2672]: W0516 16:10:42.077555 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.078148 kubelet[2672]: E0516 16:10:42.077561 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.078148 kubelet[2672]: E0516 16:10:42.077674 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.078148 kubelet[2672]: W0516 16:10:42.077682 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.078148 kubelet[2672]: E0516 16:10:42.077689 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.078148 kubelet[2672]: E0516 16:10:42.077792 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.078148 kubelet[2672]: W0516 16:10:42.077798 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.078339 kubelet[2672]: E0516 16:10:42.077805 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.078339 kubelet[2672]: E0516 16:10:42.078016 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.078339 kubelet[2672]: W0516 16:10:42.078025 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.078339 kubelet[2672]: E0516 16:10:42.078032 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.078339 kubelet[2672]: E0516 16:10:42.078196 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.078339 kubelet[2672]: W0516 16:10:42.078204 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.078339 kubelet[2672]: E0516 16:10:42.078216 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.078513 kubelet[2672]: E0516 16:10:42.078354 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.078513 kubelet[2672]: W0516 16:10:42.078362 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.078513 kubelet[2672]: E0516 16:10:42.078369 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.078972 kubelet[2672]: E0516 16:10:42.078518 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.078972 kubelet[2672]: W0516 16:10:42.078525 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.078972 kubelet[2672]: E0516 16:10:42.078533 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.078972 kubelet[2672]: E0516 16:10:42.078650 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.078972 kubelet[2672]: W0516 16:10:42.078656 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.078972 kubelet[2672]: E0516 16:10:42.078663 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.078972 kubelet[2672]: E0516 16:10:42.078766 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.078972 kubelet[2672]: W0516 16:10:42.078773 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.078972 kubelet[2672]: E0516 16:10:42.078780 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.078972 kubelet[2672]: E0516 16:10:42.078957 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.079176 kubelet[2672]: W0516 16:10:42.078966 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.079176 kubelet[2672]: E0516 16:10:42.078982 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.079214 kubelet[2672]: E0516 16:10:42.079190 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.079214 kubelet[2672]: W0516 16:10:42.079204 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.079255 kubelet[2672]: E0516 16:10:42.079223 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.079379 kubelet[2672]: E0516 16:10:42.079360 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.079379 kubelet[2672]: W0516 16:10:42.079371 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.079439 kubelet[2672]: E0516 16:10:42.079410 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.079545 kubelet[2672]: E0516 16:10:42.079532 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.079545 kubelet[2672]: W0516 16:10:42.079543 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.079656 kubelet[2672]: E0516 16:10:42.079564 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.079687 kubelet[2672]: E0516 16:10:42.079677 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.079711 kubelet[2672]: W0516 16:10:42.079685 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.079711 kubelet[2672]: E0516 16:10:42.079699 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.079846 kubelet[2672]: E0516 16:10:42.079832 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.079846 kubelet[2672]: W0516 16:10:42.079842 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.079910 kubelet[2672]: E0516 16:10:42.079853 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.080013 kubelet[2672]: E0516 16:10:42.080001 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.080013 kubelet[2672]: W0516 16:10:42.080011 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.080056 kubelet[2672]: E0516 16:10:42.080024 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.080226 kubelet[2672]: E0516 16:10:42.080212 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.080251 kubelet[2672]: W0516 16:10:42.080225 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.080251 kubelet[2672]: E0516 16:10:42.080239 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.080378 kubelet[2672]: E0516 16:10:42.080367 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.080378 kubelet[2672]: W0516 16:10:42.080377 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.080423 kubelet[2672]: E0516 16:10:42.080397 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.080557 kubelet[2672]: E0516 16:10:42.080546 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.080582 kubelet[2672]: W0516 16:10:42.080556 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.080582 kubelet[2672]: E0516 16:10:42.080569 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.080797 kubelet[2672]: E0516 16:10:42.080781 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.080819 kubelet[2672]: W0516 16:10:42.080798 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.080819 kubelet[2672]: E0516 16:10:42.080810 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 16:10:42.081008 kubelet[2672]: E0516 16:10:42.080995 2672 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 16:10:42.081038 kubelet[2672]: W0516 16:10:42.081009 2672 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 16:10:42.081038 kubelet[2672]: E0516 16:10:42.081019 2672 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 16:10:42.086472 kubelet[2672]: I0516 16:10:42.086421 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b8fb88cd-2kszj" podStartSLOduration=1.674865385 podStartE2EDuration="4.086404573s" podCreationTimestamp="2025-05-16 16:10:38 +0000 UTC" firstStartedPulling="2025-05-16 16:10:38.688638253 +0000 UTC m=+19.770534651" lastFinishedPulling="2025-05-16 16:10:41.100177441 +0000 UTC m=+22.182073839" observedRunningTime="2025-05-16 16:10:42.082705287 +0000 UTC m=+23.164601685" watchObservedRunningTime="2025-05-16 16:10:42.086404573 +0000 UTC m=+23.168301011" May 16 16:10:42.547567 containerd[1526]: time="2025-05-16T16:10:42.547512928Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:42.548207 containerd[1526]: time="2025-05-16T16:10:42.548172209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4264304" May 16 16:10:42.548933 containerd[1526]: time="2025-05-16T16:10:42.548906290Z" level=info msg="ImageCreate event name:\"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:42.550958 containerd[1526]: time="2025-05-16T16:10:42.550926533Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:42.551873 containerd[1526]: time="2025-05-16T16:10:42.551827094Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5633505\" in 1.451259012s" May 16 16:10:42.551873 containerd[1526]: time="2025-05-16T16:10:42.551863014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"" May 16 16:10:42.554371 containerd[1526]: time="2025-05-16T16:10:42.554343578Z" level=info msg="CreateContainer within sandbox \"f5a84c175604c0793723ed77e672664cc56fb4e8f60e199a878c17abf3252fa5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 16 16:10:42.561825 containerd[1526]: time="2025-05-16T16:10:42.559527665Z" level=info msg="Container f052c543ccf35bcbc532f688db6fcd858c0bf1928bc34ba0c40e246544dbebef: CDI devices from CRI Config.CDIDevices: []" May 16 16:10:42.566946 containerd[1526]: time="2025-05-16T16:10:42.566906436Z" level=info msg="CreateContainer within sandbox \"f5a84c175604c0793723ed77e672664cc56fb4e8f60e199a878c17abf3252fa5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f052c543ccf35bcbc532f688db6fcd858c0bf1928bc34ba0c40e246544dbebef\"" May 16 16:10:42.567282 containerd[1526]: time="2025-05-16T16:10:42.567252677Z" level=info msg="StartContainer for \"f052c543ccf35bcbc532f688db6fcd858c0bf1928bc34ba0c40e246544dbebef\"" May 16 16:10:42.568521 containerd[1526]: time="2025-05-16T16:10:42.568499438Z" level=info msg="connecting to shim f052c543ccf35bcbc532f688db6fcd858c0bf1928bc34ba0c40e246544dbebef" address="unix:///run/containerd/s/4a51d3eb4d550b775ebe6ab21fcf471ed641fdb60e7880503c5482b0c94681bc" protocol=ttrpc version=3 May 16 16:10:42.590024 systemd[1]: Started cri-containerd-f052c543ccf35bcbc532f688db6fcd858c0bf1928bc34ba0c40e246544dbebef.scope - libcontainer container f052c543ccf35bcbc532f688db6fcd858c0bf1928bc34ba0c40e246544dbebef. 
May 16 16:10:42.621129 containerd[1526]: time="2025-05-16T16:10:42.621082555Z" level=info msg="StartContainer for \"f052c543ccf35bcbc532f688db6fcd858c0bf1928bc34ba0c40e246544dbebef\" returns successfully" May 16 16:10:42.658852 systemd[1]: cri-containerd-f052c543ccf35bcbc532f688db6fcd858c0bf1928bc34ba0c40e246544dbebef.scope: Deactivated successfully. May 16 16:10:42.675659 containerd[1526]: time="2025-05-16T16:10:42.675589515Z" level=info msg="received exit event container_id:\"f052c543ccf35bcbc532f688db6fcd858c0bf1928bc34ba0c40e246544dbebef\" id:\"f052c543ccf35bcbc532f688db6fcd858c0bf1928bc34ba0c40e246544dbebef\" pid:3372 exited_at:{seconds:1747411842 nanos:660911334}" May 16 16:10:42.680617 containerd[1526]: time="2025-05-16T16:10:42.680574603Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f052c543ccf35bcbc532f688db6fcd858c0bf1928bc34ba0c40e246544dbebef\" id:\"f052c543ccf35bcbc532f688db6fcd858c0bf1928bc34ba0c40e246544dbebef\" pid:3372 exited_at:{seconds:1747411842 nanos:660911334}" May 16 16:10:42.711463 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f052c543ccf35bcbc532f688db6fcd858c0bf1928bc34ba0c40e246544dbebef-rootfs.mount: Deactivated successfully. 
May 16 16:10:43.072156 kubelet[2672]: I0516 16:10:43.072119 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 16:10:43.072581 kubelet[2672]: E0516 16:10:43.072388 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:43.073762 containerd[1526]: time="2025-05-16T16:10:43.073724851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 16 16:10:43.999072 kubelet[2672]: E0516 16:10:43.999007 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlzmx" podUID="d9789950-a309-4163-96c5-d67e446c252b" May 16 16:10:45.574777 kubelet[2672]: I0516 16:10:45.574140 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 16:10:45.574777 kubelet[2672]: E0516 16:10:45.574466 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:45.998715 kubelet[2672]: E0516 16:10:45.998596 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-vlzmx" podUID="d9789950-a309-4163-96c5-d67e446c252b" May 16 16:10:46.077984 kubelet[2672]: E0516 16:10:46.077954 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:46.811621 containerd[1526]: time="2025-05-16T16:10:46.811578173Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:46.812576 containerd[1526]: time="2025-05-16T16:10:46.812547214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=65748976" May 16 16:10:46.813619 containerd[1526]: time="2025-05-16T16:10:46.813579095Z" level=info msg="ImageCreate event name:\"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:46.816190 containerd[1526]: time="2025-05-16T16:10:46.816122418Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:46.816637 containerd[1526]: time="2025-05-16T16:10:46.816602058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"67118217\" in 3.742841567s" May 16 16:10:46.816637 containerd[1526]: time="2025-05-16T16:10:46.816635179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 16 16:10:46.818829 containerd[1526]: time="2025-05-16T16:10:46.818756981Z" level=info msg="CreateContainer within sandbox \"f5a84c175604c0793723ed77e672664cc56fb4e8f60e199a878c17abf3252fa5\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 16 16:10:46.827916 containerd[1526]: time="2025-05-16T16:10:46.826842230Z" level=info msg="Container d14d08fe25c54c640d83d04abf9521dbcf94b059f187cde482ac6aacfc36f518: CDI devices from CRI 
Config.CDIDevices: []" May 16 16:10:46.836229 containerd[1526]: time="2025-05-16T16:10:46.836188801Z" level=info msg="CreateContainer within sandbox \"f5a84c175604c0793723ed77e672664cc56fb4e8f60e199a878c17abf3252fa5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d14d08fe25c54c640d83d04abf9521dbcf94b059f187cde482ac6aacfc36f518\"" May 16 16:10:46.836750 containerd[1526]: time="2025-05-16T16:10:46.836720761Z" level=info msg="StartContainer for \"d14d08fe25c54c640d83d04abf9521dbcf94b059f187cde482ac6aacfc36f518\"" May 16 16:10:46.838221 containerd[1526]: time="2025-05-16T16:10:46.838181043Z" level=info msg="connecting to shim d14d08fe25c54c640d83d04abf9521dbcf94b059f187cde482ac6aacfc36f518" address="unix:///run/containerd/s/4a51d3eb4d550b775ebe6ab21fcf471ed641fdb60e7880503c5482b0c94681bc" protocol=ttrpc version=3 May 16 16:10:46.864082 systemd[1]: Started cri-containerd-d14d08fe25c54c640d83d04abf9521dbcf94b059f187cde482ac6aacfc36f518.scope - libcontainer container d14d08fe25c54c640d83d04abf9521dbcf94b059f187cde482ac6aacfc36f518. May 16 16:10:46.899508 containerd[1526]: time="2025-05-16T16:10:46.899458672Z" level=info msg="StartContainer for \"d14d08fe25c54c640d83d04abf9521dbcf94b059f187cde482ac6aacfc36f518\" returns successfully" May 16 16:10:47.452588 systemd[1]: cri-containerd-d14d08fe25c54c640d83d04abf9521dbcf94b059f187cde482ac6aacfc36f518.scope: Deactivated successfully. May 16 16:10:47.452997 systemd[1]: cri-containerd-d14d08fe25c54c640d83d04abf9521dbcf94b059f187cde482ac6aacfc36f518.scope: Consumed 462ms CPU time, 180.9M memory peak, 3.5M read from disk, 165.5M written to disk. 
May 16 16:10:47.469562 containerd[1526]: time="2025-05-16T16:10:47.469519284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d14d08fe25c54c640d83d04abf9521dbcf94b059f187cde482ac6aacfc36f518\" id:\"d14d08fe25c54c640d83d04abf9521dbcf94b059f187cde482ac6aacfc36f518\" pid:3434 exited_at:{seconds:1747411847 nanos:468477203}" May 16 16:10:47.476972 containerd[1526]: time="2025-05-16T16:10:47.476906771Z" level=info msg="received exit event container_id:\"d14d08fe25c54c640d83d04abf9521dbcf94b059f187cde482ac6aacfc36f518\" id:\"d14d08fe25c54c640d83d04abf9521dbcf94b059f187cde482ac6aacfc36f518\" pid:3434 exited_at:{seconds:1747411847 nanos:468477203}" May 16 16:10:47.484642 kubelet[2672]: I0516 16:10:47.484611 2672 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 16:10:47.502326 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d14d08fe25c54c640d83d04abf9521dbcf94b059f187cde482ac6aacfc36f518-rootfs.mount: Deactivated successfully. 
May 16 16:10:47.620910 kubelet[2672]: I0516 16:10:47.620353 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e16189b3-0b4c-4b52-a2ac-64fc0606eab1-whisker-ca-bundle\") pod \"whisker-69fd877466-mrldt\" (UID: \"e16189b3-0b4c-4b52-a2ac-64fc0606eab1\") " pod="calico-system/whisker-69fd877466-mrldt" May 16 16:10:47.620910 kubelet[2672]: I0516 16:10:47.620603 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e264e0c-96bd-4ee4-af75-118440e86fe2-config-volume\") pod \"coredns-668d6bf9bc-5b84t\" (UID: \"5e264e0c-96bd-4ee4-af75-118440e86fe2\") " pod="kube-system/coredns-668d6bf9bc-5b84t" May 16 16:10:47.620910 kubelet[2672]: I0516 16:10:47.620802 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/052dd569-b80c-4bbb-b6f6-acc75ce14539-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-5hgdf\" (UID: \"052dd569-b80c-4bbb-b6f6-acc75ce14539\") " pod="calico-system/goldmane-78d55f7ddc-5hgdf" May 16 16:10:47.621128 kubelet[2672]: I0516 16:10:47.620910 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b3f85d7-7d18-49d6-8a32-85309c91c6cf-config-volume\") pod \"coredns-668d6bf9bc-4g676\" (UID: \"9b3f85d7-7d18-49d6-8a32-85309c91c6cf\") " pod="kube-system/coredns-668d6bf9bc-4g676" May 16 16:10:47.621128 kubelet[2672]: I0516 16:10:47.620972 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r7lt\" (UniqueName: \"kubernetes.io/projected/052dd569-b80c-4bbb-b6f6-acc75ce14539-kube-api-access-8r7lt\") pod \"goldmane-78d55f7ddc-5hgdf\" (UID: \"052dd569-b80c-4bbb-b6f6-acc75ce14539\") " 
pod="calico-system/goldmane-78d55f7ddc-5hgdf" May 16 16:10:47.621128 kubelet[2672]: I0516 16:10:47.620996 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/efa9987a-af66-4b09-af3c-4a5eb93dc6dc-calico-apiserver-certs\") pod \"calico-apiserver-58b5548bff-6l5gn\" (UID: \"efa9987a-af66-4b09-af3c-4a5eb93dc6dc\") " pod="calico-apiserver/calico-apiserver-58b5548bff-6l5gn" May 16 16:10:47.621128 kubelet[2672]: I0516 16:10:47.621013 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnjwv\" (UniqueName: \"kubernetes.io/projected/efa9987a-af66-4b09-af3c-4a5eb93dc6dc-kube-api-access-gnjwv\") pod \"calico-apiserver-58b5548bff-6l5gn\" (UID: \"efa9987a-af66-4b09-af3c-4a5eb93dc6dc\") " pod="calico-apiserver/calico-apiserver-58b5548bff-6l5gn" May 16 16:10:47.621128 kubelet[2672]: I0516 16:10:47.621030 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/895b517c-3cb8-4dbd-b16c-6cd9530117dd-tigera-ca-bundle\") pod \"calico-kube-controllers-85cf998847-g6kxt\" (UID: \"895b517c-3cb8-4dbd-b16c-6cd9530117dd\") " pod="calico-system/calico-kube-controllers-85cf998847-g6kxt" May 16 16:10:47.621244 kubelet[2672]: I0516 16:10:47.621047 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkk94\" (UniqueName: \"kubernetes.io/projected/5e264e0c-96bd-4ee4-af75-118440e86fe2-kube-api-access-bkk94\") pod \"coredns-668d6bf9bc-5b84t\" (UID: \"5e264e0c-96bd-4ee4-af75-118440e86fe2\") " pod="kube-system/coredns-668d6bf9bc-5b84t" May 16 16:10:47.621244 kubelet[2672]: I0516 16:10:47.621063 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zshs\" (UniqueName: 
\"kubernetes.io/projected/9b3f85d7-7d18-49d6-8a32-85309c91c6cf-kube-api-access-2zshs\") pod \"coredns-668d6bf9bc-4g676\" (UID: \"9b3f85d7-7d18-49d6-8a32-85309c91c6cf\") " pod="kube-system/coredns-668d6bf9bc-4g676" May 16 16:10:47.621244 kubelet[2672]: I0516 16:10:47.621082 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nm5n6\" (UniqueName: \"kubernetes.io/projected/895b517c-3cb8-4dbd-b16c-6cd9530117dd-kube-api-access-nm5n6\") pod \"calico-kube-controllers-85cf998847-g6kxt\" (UID: \"895b517c-3cb8-4dbd-b16c-6cd9530117dd\") " pod="calico-system/calico-kube-controllers-85cf998847-g6kxt" May 16 16:10:47.621244 kubelet[2672]: I0516 16:10:47.621097 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e16189b3-0b4c-4b52-a2ac-64fc0606eab1-whisker-backend-key-pair\") pod \"whisker-69fd877466-mrldt\" (UID: \"e16189b3-0b4c-4b52-a2ac-64fc0606eab1\") " pod="calico-system/whisker-69fd877466-mrldt" May 16 16:10:47.621244 kubelet[2672]: I0516 16:10:47.621116 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/052dd569-b80c-4bbb-b6f6-acc75ce14539-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-5hgdf\" (UID: \"052dd569-b80c-4bbb-b6f6-acc75ce14539\") " pod="calico-system/goldmane-78d55f7ddc-5hgdf" May 16 16:10:47.621360 kubelet[2672]: I0516 16:10:47.621135 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/052dd569-b80c-4bbb-b6f6-acc75ce14539-config\") pod \"goldmane-78d55f7ddc-5hgdf\" (UID: \"052dd569-b80c-4bbb-b6f6-acc75ce14539\") " pod="calico-system/goldmane-78d55f7ddc-5hgdf" May 16 16:10:47.621360 kubelet[2672]: I0516 16:10:47.621155 2672 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdnb2\" (UniqueName: \"kubernetes.io/projected/e16189b3-0b4c-4b52-a2ac-64fc0606eab1-kube-api-access-wdnb2\") pod \"whisker-69fd877466-mrldt\" (UID: \"e16189b3-0b4c-4b52-a2ac-64fc0606eab1\") " pod="calico-system/whisker-69fd877466-mrldt" May 16 16:10:47.640266 systemd[1]: Created slice kubepods-burstable-pod9b3f85d7_7d18_49d6_8a32_85309c91c6cf.slice - libcontainer container kubepods-burstable-pod9b3f85d7_7d18_49d6_8a32_85309c91c6cf.slice. May 16 16:10:47.669041 systemd[1]: Created slice kubepods-besteffort-pod895b517c_3cb8_4dbd_b16c_6cd9530117dd.slice - libcontainer container kubepods-besteffort-pod895b517c_3cb8_4dbd_b16c_6cd9530117dd.slice. May 16 16:10:47.676339 systemd[1]: Created slice kubepods-besteffort-pod052dd569_b80c_4bbb_b6f6_acc75ce14539.slice - libcontainer container kubepods-besteffort-pod052dd569_b80c_4bbb_b6f6_acc75ce14539.slice. May 16 16:10:47.680430 systemd[1]: Created slice kubepods-burstable-pod5e264e0c_96bd_4ee4_af75_118440e86fe2.slice - libcontainer container kubepods-burstable-pod5e264e0c_96bd_4ee4_af75_118440e86fe2.slice. May 16 16:10:47.688169 systemd[1]: Created slice kubepods-besteffort-pode16189b3_0b4c_4b52_a2ac_64fc0606eab1.slice - libcontainer container kubepods-besteffort-pode16189b3_0b4c_4b52_a2ac_64fc0606eab1.slice. May 16 16:10:47.694819 systemd[1]: Created slice kubepods-besteffort-podefa9987a_af66_4b09_af3c_4a5eb93dc6dc.slice - libcontainer container kubepods-besteffort-podefa9987a_af66_4b09_af3c_4a5eb93dc6dc.slice. May 16 16:10:47.700735 systemd[1]: Created slice kubepods-besteffort-podf8a0d9c3_5243_4963_a185_76df8ad5a59c.slice - libcontainer container kubepods-besteffort-podf8a0d9c3_5243_4963_a185_76df8ad5a59c.slice. 
May 16 16:10:47.722283 kubelet[2672]: I0516 16:10:47.722184 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f8a0d9c3-5243-4963-a185-76df8ad5a59c-calico-apiserver-certs\") pod \"calico-apiserver-58b5548bff-kp9g5\" (UID: \"f8a0d9c3-5243-4963-a185-76df8ad5a59c\") " pod="calico-apiserver/calico-apiserver-58b5548bff-kp9g5" May 16 16:10:47.722490 kubelet[2672]: I0516 16:10:47.722471 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v67d\" (UniqueName: \"kubernetes.io/projected/f8a0d9c3-5243-4963-a185-76df8ad5a59c-kube-api-access-7v67d\") pod \"calico-apiserver-58b5548bff-kp9g5\" (UID: \"f8a0d9c3-5243-4963-a185-76df8ad5a59c\") " pod="calico-apiserver/calico-apiserver-58b5548bff-kp9g5" May 16 16:10:47.946157 kubelet[2672]: E0516 16:10:47.946113 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:47.946892 containerd[1526]: time="2025-05-16T16:10:47.946847550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4g676,Uid:9b3f85d7-7d18-49d6-8a32-85309c91c6cf,Namespace:kube-system,Attempt:0,}" May 16 16:10:47.973273 containerd[1526]: time="2025-05-16T16:10:47.972541737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cf998847-g6kxt,Uid:895b517c-3cb8-4dbd-b16c-6cd9530117dd,Namespace:calico-system,Attempt:0,}" May 16 16:10:47.982839 containerd[1526]: time="2025-05-16T16:10:47.979601664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-5hgdf,Uid:052dd569-b80c-4bbb-b6f6-acc75ce14539,Namespace:calico-system,Attempt:0,}" May 16 16:10:47.987911 kubelet[2672]: E0516 16:10:47.984568 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:10:47.988031 containerd[1526]: time="2025-05-16T16:10:47.984983270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5b84t,Uid:5e264e0c-96bd-4ee4-af75-118440e86fe2,Namespace:kube-system,Attempt:0,}" May 16 16:10:48.000348 containerd[1526]: time="2025-05-16T16:10:47.999982966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b5548bff-6l5gn,Uid:efa9987a-af66-4b09-af3c-4a5eb93dc6dc,Namespace:calico-apiserver,Attempt:0,}" May 16 16:10:48.000348 containerd[1526]: time="2025-05-16T16:10:48.000265246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69fd877466-mrldt,Uid:e16189b3-0b4c-4b52-a2ac-64fc0606eab1,Namespace:calico-system,Attempt:0,}" May 16 16:10:48.005787 containerd[1526]: time="2025-05-16T16:10:48.004581811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b5548bff-kp9g5,Uid:f8a0d9c3-5243-4963-a185-76df8ad5a59c,Namespace:calico-apiserver,Attempt:0,}" May 16 16:10:48.037710 systemd[1]: Created slice kubepods-besteffort-podd9789950_a309_4163_96c5_d67e446c252b.slice - libcontainer container kubepods-besteffort-podd9789950_a309_4163_96c5_d67e446c252b.slice. 
May 16 16:10:48.049867 containerd[1526]: time="2025-05-16T16:10:48.049829216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlzmx,Uid:d9789950-a309-4163-96c5-d67e446c252b,Namespace:calico-system,Attempt:0,}" May 16 16:10:48.126377 containerd[1526]: time="2025-05-16T16:10:48.122554288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 16 16:10:48.352742 containerd[1526]: time="2025-05-16T16:10:48.352617837Z" level=error msg="Failed to destroy network for sandbox \"5218113ccb7a7dfee320a0488ffacf4b0e71a5554c2b7f69a55b0bf608dd2e98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.352742 containerd[1526]: time="2025-05-16T16:10:48.352633997Z" level=error msg="Failed to destroy network for sandbox \"4ce7316b4899bf196b9c15e4035e029dd82167b188695bff0dd679f324159748\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.354208 containerd[1526]: time="2025-05-16T16:10:48.354167398Z" level=error msg="Failed to destroy network for sandbox \"c7dd7286d60f3041d0547a92b6aadb2e903c7959db6201817bef8c1b60e6a509\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.356186 containerd[1526]: time="2025-05-16T16:10:48.356068560Z" level=error msg="Failed to destroy network for sandbox \"b5e6564b087a937741f00e8adf1932fd91e9b38144755d827565aca5f0bf454f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.358108 containerd[1526]: 
time="2025-05-16T16:10:48.358057482Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cf998847-g6kxt,Uid:895b517c-3cb8-4dbd-b16c-6cd9530117dd,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5218113ccb7a7dfee320a0488ffacf4b0e71a5554c2b7f69a55b0bf608dd2e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.359076 containerd[1526]: time="2025-05-16T16:10:48.359029723Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4g676,Uid:9b3f85d7-7d18-49d6-8a32-85309c91c6cf,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ce7316b4899bf196b9c15e4035e029dd82167b188695bff0dd679f324159748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.359224 kubelet[2672]: E0516 16:10:48.359073 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5218113ccb7a7dfee320a0488ffacf4b0e71a5554c2b7f69a55b0bf608dd2e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.359224 kubelet[2672]: E0516 16:10:48.359159 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5218113ccb7a7dfee320a0488ffacf4b0e71a5554c2b7f69a55b0bf608dd2e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85cf998847-g6kxt" May 16 16:10:48.359224 kubelet[2672]: E0516 16:10:48.359216 2672 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5218113ccb7a7dfee320a0488ffacf4b0e71a5554c2b7f69a55b0bf608dd2e98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85cf998847-g6kxt" May 16 16:10:48.359457 kubelet[2672]: E0516 16:10:48.359212 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ce7316b4899bf196b9c15e4035e029dd82167b188695bff0dd679f324159748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.359457 kubelet[2672]: E0516 16:10:48.359292 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ce7316b4899bf196b9c15e4035e029dd82167b188695bff0dd679f324159748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4g676" May 16 16:10:48.359457 kubelet[2672]: E0516 16:10:48.359319 2672 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ce7316b4899bf196b9c15e4035e029dd82167b188695bff0dd679f324159748\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-668d6bf9bc-4g676" May 16 16:10:48.359532 kubelet[2672]: E0516 16:10:48.359366 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4g676_kube-system(9b3f85d7-7d18-49d6-8a32-85309c91c6cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4g676_kube-system(9b3f85d7-7d18-49d6-8a32-85309c91c6cf)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ce7316b4899bf196b9c15e4035e029dd82167b188695bff0dd679f324159748\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4g676" podUID="9b3f85d7-7d18-49d6-8a32-85309c91c6cf" May 16 16:10:48.359574 kubelet[2672]: E0516 16:10:48.359520 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85cf998847-g6kxt_calico-system(895b517c-3cb8-4dbd-b16c-6cd9530117dd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85cf998847-g6kxt_calico-system(895b517c-3cb8-4dbd-b16c-6cd9530117dd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5218113ccb7a7dfee320a0488ffacf4b0e71a5554c2b7f69a55b0bf608dd2e98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85cf998847-g6kxt" podUID="895b517c-3cb8-4dbd-b16c-6cd9530117dd" May 16 16:10:48.360011 containerd[1526]: time="2025-05-16T16:10:48.359968524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlzmx,Uid:d9789950-a309-4163-96c5-d67e446c252b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"c7dd7286d60f3041d0547a92b6aadb2e903c7959db6201817bef8c1b60e6a509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.361643 kubelet[2672]: E0516 16:10:48.361238 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7dd7286d60f3041d0547a92b6aadb2e903c7959db6201817bef8c1b60e6a509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.361643 kubelet[2672]: E0516 16:10:48.361599 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7dd7286d60f3041d0547a92b6aadb2e903c7959db6201817bef8c1b60e6a509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vlzmx" May 16 16:10:48.361643 kubelet[2672]: E0516 16:10:48.361636 2672 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7dd7286d60f3041d0547a92b6aadb2e903c7959db6201817bef8c1b60e6a509\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-vlzmx" May 16 16:10:48.361807 containerd[1526]: time="2025-05-16T16:10:48.360849645Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69fd877466-mrldt,Uid:e16189b3-0b4c-4b52-a2ac-64fc0606eab1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"b5e6564b087a937741f00e8adf1932fd91e9b38144755d827565aca5f0bf454f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.361857 kubelet[2672]: E0516 16:10:48.361763 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-vlzmx_calico-system(d9789950-a309-4163-96c5-d67e446c252b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-vlzmx_calico-system(d9789950-a309-4163-96c5-d67e446c252b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7dd7286d60f3041d0547a92b6aadb2e903c7959db6201817bef8c1b60e6a509\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-vlzmx" podUID="d9789950-a309-4163-96c5-d67e446c252b" May 16 16:10:48.363009 kubelet[2672]: E0516 16:10:48.362951 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5e6564b087a937741f00e8adf1932fd91e9b38144755d827565aca5f0bf454f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.363009 kubelet[2672]: E0516 16:10:48.362996 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5e6564b087a937741f00e8adf1932fd91e9b38144755d827565aca5f0bf454f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/whisker-69fd877466-mrldt" May 16 16:10:48.363137 kubelet[2672]: E0516 16:10:48.363012 2672 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5e6564b087a937741f00e8adf1932fd91e9b38144755d827565aca5f0bf454f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69fd877466-mrldt" May 16 16:10:48.363137 kubelet[2672]: E0516 16:10:48.363062 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69fd877466-mrldt_calico-system(e16189b3-0b4c-4b52-a2ac-64fc0606eab1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69fd877466-mrldt_calico-system(e16189b3-0b4c-4b52-a2ac-64fc0606eab1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5e6564b087a937741f00e8adf1932fd91e9b38144755d827565aca5f0bf454f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69fd877466-mrldt" podUID="e16189b3-0b4c-4b52-a2ac-64fc0606eab1" May 16 16:10:48.363993 containerd[1526]: time="2025-05-16T16:10:48.362098806Z" level=error msg="Failed to destroy network for sandbox \"4a8bf79eac3dedd0e8df77d02df48e7a3bb3c15c3a6350e30fd1bb70e7ac3231\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.371075 containerd[1526]: time="2025-05-16T16:10:48.371015175Z" level=error msg="Failed to destroy network for sandbox \"383b61f9375d8aa0a8bb3719a3a445cfcf18b0e33ded6ebb2625f34798716d95\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.371371 containerd[1526]: time="2025-05-16T16:10:48.371322215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b5548bff-6l5gn,Uid:efa9987a-af66-4b09-af3c-4a5eb93dc6dc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a8bf79eac3dedd0e8df77d02df48e7a3bb3c15c3a6350e30fd1bb70e7ac3231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.371582 kubelet[2672]: E0516 16:10:48.371517 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a8bf79eac3dedd0e8df77d02df48e7a3bb3c15c3a6350e30fd1bb70e7ac3231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.371622 kubelet[2672]: E0516 16:10:48.371591 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a8bf79eac3dedd0e8df77d02df48e7a3bb3c15c3a6350e30fd1bb70e7ac3231\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58b5548bff-6l5gn" May 16 16:10:48.371647 kubelet[2672]: E0516 16:10:48.371610 2672 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a8bf79eac3dedd0e8df77d02df48e7a3bb3c15c3a6350e30fd1bb70e7ac3231\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58b5548bff-6l5gn" May 16 16:10:48.371682 kubelet[2672]: E0516 16:10:48.371659 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58b5548bff-6l5gn_calico-apiserver(efa9987a-af66-4b09-af3c-4a5eb93dc6dc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58b5548bff-6l5gn_calico-apiserver(efa9987a-af66-4b09-af3c-4a5eb93dc6dc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a8bf79eac3dedd0e8df77d02df48e7a3bb3c15c3a6350e30fd1bb70e7ac3231\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58b5548bff-6l5gn" podUID="efa9987a-af66-4b09-af3c-4a5eb93dc6dc" May 16 16:10:48.372749 containerd[1526]: time="2025-05-16T16:10:48.372693897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-5hgdf,Uid:052dd569-b80c-4bbb-b6f6-acc75ce14539,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"383b61f9375d8aa0a8bb3719a3a445cfcf18b0e33ded6ebb2625f34798716d95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.372844 containerd[1526]: time="2025-05-16T16:10:48.372699617Z" level=error msg="Failed to destroy network for sandbox \"4e83c782828c950668b713c79176a474510ddd9d9556764c26171746f8b40ef8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" May 16 16:10:48.373436 kubelet[2672]: E0516 16:10:48.373031 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"383b61f9375d8aa0a8bb3719a3a445cfcf18b0e33ded6ebb2625f34798716d95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.373436 kubelet[2672]: E0516 16:10:48.373071 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"383b61f9375d8aa0a8bb3719a3a445cfcf18b0e33ded6ebb2625f34798716d95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-5hgdf" May 16 16:10:48.373436 kubelet[2672]: E0516 16:10:48.373090 2672 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"383b61f9375d8aa0a8bb3719a3a445cfcf18b0e33ded6ebb2625f34798716d95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-5hgdf" May 16 16:10:48.373555 kubelet[2672]: E0516 16:10:48.373124 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-5hgdf_calico-system(052dd569-b80c-4bbb-b6f6-acc75ce14539)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-5hgdf_calico-system(052dd569-b80c-4bbb-b6f6-acc75ce14539)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"383b61f9375d8aa0a8bb3719a3a445cfcf18b0e33ded6ebb2625f34798716d95\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-5hgdf" podUID="052dd569-b80c-4bbb-b6f6-acc75ce14539" May 16 16:10:48.373885 containerd[1526]: time="2025-05-16T16:10:48.373824858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b5548bff-kp9g5,Uid:f8a0d9c3-5243-4963-a185-76df8ad5a59c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e83c782828c950668b713c79176a474510ddd9d9556764c26171746f8b40ef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.374057 kubelet[2672]: E0516 16:10:48.374003 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e83c782828c950668b713c79176a474510ddd9d9556764c26171746f8b40ef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.374114 kubelet[2672]: E0516 16:10:48.374072 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e83c782828c950668b713c79176a474510ddd9d9556764c26171746f8b40ef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58b5548bff-kp9g5" May 16 16:10:48.374144 kubelet[2672]: E0516 16:10:48.374118 2672 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"4e83c782828c950668b713c79176a474510ddd9d9556764c26171746f8b40ef8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58b5548bff-kp9g5" May 16 16:10:48.374304 kubelet[2672]: E0516 16:10:48.374280 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58b5548bff-kp9g5_calico-apiserver(f8a0d9c3-5243-4963-a185-76df8ad5a59c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58b5548bff-kp9g5_calico-apiserver(f8a0d9c3-5243-4963-a185-76df8ad5a59c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e83c782828c950668b713c79176a474510ddd9d9556764c26171746f8b40ef8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58b5548bff-kp9g5" podUID="f8a0d9c3-5243-4963-a185-76df8ad5a59c" May 16 16:10:48.377091 containerd[1526]: time="2025-05-16T16:10:48.376800341Z" level=error msg="Failed to destroy network for sandbox \"d3c3f8a7843dd53c07776943776ec492dd7f1c8fc13c2e2a23042e5c311338db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.378262 containerd[1526]: time="2025-05-16T16:10:48.378052422Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5b84t,Uid:5e264e0c-96bd-4ee4-af75-118440e86fe2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c3f8a7843dd53c07776943776ec492dd7f1c8fc13c2e2a23042e5c311338db\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.378353 kubelet[2672]: E0516 16:10:48.378233 2672 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c3f8a7843dd53c07776943776ec492dd7f1c8fc13c2e2a23042e5c311338db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 16:10:48.378353 kubelet[2672]: E0516 16:10:48.378288 2672 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c3f8a7843dd53c07776943776ec492dd7f1c8fc13c2e2a23042e5c311338db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5b84t" May 16 16:10:48.378353 kubelet[2672]: E0516 16:10:48.378307 2672 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d3c3f8a7843dd53c07776943776ec492dd7f1c8fc13c2e2a23042e5c311338db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-5b84t" May 16 16:10:48.378435 kubelet[2672]: E0516 16:10:48.378348 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-5b84t_kube-system(5e264e0c-96bd-4ee4-af75-118440e86fe2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-5b84t_kube-system(5e264e0c-96bd-4ee4-af75-118440e86fe2)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"d3c3f8a7843dd53c07776943776ec492dd7f1c8fc13c2e2a23042e5c311338db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-5b84t" podUID="5e264e0c-96bd-4ee4-af75-118440e86fe2" May 16 16:10:48.828352 systemd[1]: run-netns-cni\x2d98421020\x2d13e7\x2dae6a\x2da503\x2d223ab7d04f41.mount: Deactivated successfully. May 16 16:10:48.828439 systemd[1]: run-netns-cni\x2d8b0b0e12\x2dce93\x2dd2d6\x2d6be9\x2de031f59ce03d.mount: Deactivated successfully. May 16 16:10:48.828484 systemd[1]: run-netns-cni\x2d96845bd7\x2d9a71\x2d5080\x2dff2e\x2dffc17ec68dfe.mount: Deactivated successfully. May 16 16:10:52.496029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount685920694.mount: Deactivated successfully. May 16 16:10:52.638842 containerd[1526]: time="2025-05-16T16:10:52.638787475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=150465379" May 16 16:10:52.640049 containerd[1526]: time="2025-05-16T16:10:52.639994036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:52.641286 containerd[1526]: time="2025-05-16T16:10:52.640646156Z" level=info msg="ImageCreate event name:\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:52.642676 containerd[1526]: time="2025-05-16T16:10:52.642644238Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:10:52.643757 containerd[1526]: time="2025-05-16T16:10:52.643727678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id 
\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"150465241\" in 4.52113807s" May 16 16:10:52.643862 containerd[1526]: time="2025-05-16T16:10:52.643846198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 16 16:10:52.655080 containerd[1526]: time="2025-05-16T16:10:52.655046647Z" level=info msg="CreateContainer within sandbox \"f5a84c175604c0793723ed77e672664cc56fb4e8f60e199a878c17abf3252fa5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 16 16:10:52.661459 containerd[1526]: time="2025-05-16T16:10:52.661423932Z" level=info msg="Container 45cc5af06812392e400e673e49dceb8e0e24b2eb5465ca86cafb91c8ca01e30a: CDI devices from CRI Config.CDIDevices: []" May 16 16:10:52.703553 containerd[1526]: time="2025-05-16T16:10:52.703507644Z" level=info msg="CreateContainer within sandbox \"f5a84c175604c0793723ed77e672664cc56fb4e8f60e199a878c17abf3252fa5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"45cc5af06812392e400e673e49dceb8e0e24b2eb5465ca86cafb91c8ca01e30a\"" May 16 16:10:52.704368 containerd[1526]: time="2025-05-16T16:10:52.704189765Z" level=info msg="StartContainer for \"45cc5af06812392e400e673e49dceb8e0e24b2eb5465ca86cafb91c8ca01e30a\"" May 16 16:10:52.705824 containerd[1526]: time="2025-05-16T16:10:52.705789326Z" level=info msg="connecting to shim 45cc5af06812392e400e673e49dceb8e0e24b2eb5465ca86cafb91c8ca01e30a" address="unix:///run/containerd/s/4a51d3eb4d550b775ebe6ab21fcf471ed641fdb60e7880503c5482b0c94681bc" protocol=ttrpc version=3 May 16 16:10:52.729039 systemd[1]: Started cri-containerd-45cc5af06812392e400e673e49dceb8e0e24b2eb5465ca86cafb91c8ca01e30a.scope - libcontainer container 
45cc5af06812392e400e673e49dceb8e0e24b2eb5465ca86cafb91c8ca01e30a. May 16 16:10:52.765728 containerd[1526]: time="2025-05-16T16:10:52.765630972Z" level=info msg="StartContainer for \"45cc5af06812392e400e673e49dceb8e0e24b2eb5465ca86cafb91c8ca01e30a\" returns successfully" May 16 16:10:52.969369 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 16 16:10:52.969461 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. May 16 16:10:53.255361 kubelet[2672]: I0516 16:10:53.254623 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e16189b3-0b4c-4b52-a2ac-64fc0606eab1-whisker-ca-bundle\") pod \"e16189b3-0b4c-4b52-a2ac-64fc0606eab1\" (UID: \"e16189b3-0b4c-4b52-a2ac-64fc0606eab1\") " May 16 16:10:53.255361 kubelet[2672]: I0516 16:10:53.254671 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e16189b3-0b4c-4b52-a2ac-64fc0606eab1-whisker-backend-key-pair\") pod \"e16189b3-0b4c-4b52-a2ac-64fc0606eab1\" (UID: \"e16189b3-0b4c-4b52-a2ac-64fc0606eab1\") " May 16 16:10:53.255361 kubelet[2672]: I0516 16:10:53.254700 2672 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wdnb2\" (UniqueName: \"kubernetes.io/projected/e16189b3-0b4c-4b52-a2ac-64fc0606eab1-kube-api-access-wdnb2\") pod \"e16189b3-0b4c-4b52-a2ac-64fc0606eab1\" (UID: \"e16189b3-0b4c-4b52-a2ac-64fc0606eab1\") " May 16 16:10:53.259688 kubelet[2672]: I0516 16:10:53.259649 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e16189b3-0b4c-4b52-a2ac-64fc0606eab1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e16189b3-0b4c-4b52-a2ac-64fc0606eab1" (UID: "e16189b3-0b4c-4b52-a2ac-64fc0606eab1"). InnerVolumeSpecName "whisker-ca-bundle".
PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 16:10:53.268887 kubelet[2672]: I0516 16:10:53.268846 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e16189b3-0b4c-4b52-a2ac-64fc0606eab1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e16189b3-0b4c-4b52-a2ac-64fc0606eab1" (UID: "e16189b3-0b4c-4b52-a2ac-64fc0606eab1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 16:10:53.269300 kubelet[2672]: I0516 16:10:53.269265 2672 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e16189b3-0b4c-4b52-a2ac-64fc0606eab1-kube-api-access-wdnb2" (OuterVolumeSpecName: "kube-api-access-wdnb2") pod "e16189b3-0b4c-4b52-a2ac-64fc0606eab1" (UID: "e16189b3-0b4c-4b52-a2ac-64fc0606eab1"). InnerVolumeSpecName "kube-api-access-wdnb2". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 16:10:53.355619 kubelet[2672]: I0516 16:10:53.355559 2672 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e16189b3-0b4c-4b52-a2ac-64fc0606eab1-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 16 16:10:53.355619 kubelet[2672]: I0516 16:10:53.355591 2672 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e16189b3-0b4c-4b52-a2ac-64fc0606eab1-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 16 16:10:53.355619 kubelet[2672]: I0516 16:10:53.355600 2672 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wdnb2\" (UniqueName: \"kubernetes.io/projected/e16189b3-0b4c-4b52-a2ac-64fc0606eab1-kube-api-access-wdnb2\") on node \"localhost\" DevicePath \"\"" May 16 16:10:53.439847 systemd[1]: Removed slice kubepods-besteffort-pode16189b3_0b4c_4b52_a2ac_64fc0606eab1.slice - libcontainer container 
kubepods-besteffort-pode16189b3_0b4c_4b52_a2ac_64fc0606eab1.slice. May 16 16:10:53.450254 kubelet[2672]: I0516 16:10:53.450179 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-rvsn9" podStartSLOduration=1.634542476 podStartE2EDuration="15.450162956s" podCreationTimestamp="2025-05-16 16:10:38 +0000 UTC" firstStartedPulling="2025-05-16 16:10:38.828921439 +0000 UTC m=+19.910817837" lastFinishedPulling="2025-05-16 16:10:52.644541919 +0000 UTC m=+33.726438317" observedRunningTime="2025-05-16 16:10:53.154709103 +0000 UTC m=+34.236605501" watchObservedRunningTime="2025-05-16 16:10:53.450162956 +0000 UTC m=+34.532059394" May 16 16:10:53.486074 systemd[1]: Created slice kubepods-besteffort-pod72e24a37_c2d8_434b_8faf_2f5d0bed4d82.slice - libcontainer container kubepods-besteffort-pod72e24a37_c2d8_434b_8faf_2f5d0bed4d82.slice. May 16 16:10:53.497142 systemd[1]: var-lib-kubelet-pods-e16189b3\x2d0b4c\x2d4b52\x2da2ac\x2d64fc0606eab1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwdnb2.mount: Deactivated successfully. May 16 16:10:53.497236 systemd[1]: var-lib-kubelet-pods-e16189b3\x2d0b4c\x2d4b52\x2da2ac\x2d64fc0606eab1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
May 16 16:10:53.557535 kubelet[2672]: I0516 16:10:53.557423 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/72e24a37-c2d8-434b-8faf-2f5d0bed4d82-whisker-backend-key-pair\") pod \"whisker-744466594d-5kjch\" (UID: \"72e24a37-c2d8-434b-8faf-2f5d0bed4d82\") " pod="calico-system/whisker-744466594d-5kjch" May 16 16:10:53.557823 kubelet[2672]: I0516 16:10:53.557730 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcjv8\" (UniqueName: \"kubernetes.io/projected/72e24a37-c2d8-434b-8faf-2f5d0bed4d82-kube-api-access-bcjv8\") pod \"whisker-744466594d-5kjch\" (UID: \"72e24a37-c2d8-434b-8faf-2f5d0bed4d82\") " pod="calico-system/whisker-744466594d-5kjch" May 16 16:10:53.557823 kubelet[2672]: I0516 16:10:53.557763 2672 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72e24a37-c2d8-434b-8faf-2f5d0bed4d82-whisker-ca-bundle\") pod \"whisker-744466594d-5kjch\" (UID: \"72e24a37-c2d8-434b-8faf-2f5d0bed4d82\") " pod="calico-system/whisker-744466594d-5kjch" May 16 16:10:53.790843 containerd[1526]: time="2025-05-16T16:10:53.790790521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-744466594d-5kjch,Uid:72e24a37-c2d8-434b-8faf-2f5d0bed4d82,Namespace:calico-system,Attempt:0,}" May 16 16:10:54.025674 systemd-networkd[1446]: calid7f1a21f947: Link UP May 16 16:10:54.025959 systemd-networkd[1446]: calid7f1a21f947: Gained carrier May 16 16:10:54.039136 containerd[1526]: 2025-05-16 16:10:53.813 [INFO][3814] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 16:10:54.039136 containerd[1526]: 2025-05-16 16:10:53.864 [INFO][3814] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--744466594d--5kjch-eth0 whisker-744466594d- calico-system 72e24a37-c2d8-434b-8faf-2f5d0bed4d82 878 0 2025-05-16 16:10:53 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:744466594d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-744466594d-5kjch eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid7f1a21f947 [] [] }} ContainerID="4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" Namespace="calico-system" Pod="whisker-744466594d-5kjch" WorkloadEndpoint="localhost-k8s-whisker--744466594d--5kjch-" May 16 16:10:54.039136 containerd[1526]: 2025-05-16 16:10:53.864 [INFO][3814] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" Namespace="calico-system" Pod="whisker-744466594d-5kjch" WorkloadEndpoint="localhost-k8s-whisker--744466594d--5kjch-eth0" May 16 16:10:54.039136 containerd[1526]: 2025-05-16 16:10:53.979 [INFO][3828] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" HandleID="k8s-pod-network.4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" Workload="localhost-k8s-whisker--744466594d--5kjch-eth0" May 16 16:10:54.039342 containerd[1526]: 2025-05-16 16:10:53.979 [INFO][3828] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" HandleID="k8s-pod-network.4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" Workload="localhost-k8s-whisker--744466594d--5kjch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d990), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-744466594d-5kjch", "timestamp":"2025-05-16 16:10:53.979610537 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 16:10:54.039342 containerd[1526]: 2025-05-16 16:10:53.979 [INFO][3828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 16:10:54.039342 containerd[1526]: 2025-05-16 16:10:53.979 [INFO][3828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 16:10:54.039342 containerd[1526]: 2025-05-16 16:10:53.980 [INFO][3828] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:10:54.039342 containerd[1526]: 2025-05-16 16:10:53.991 [INFO][3828] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" host="localhost" May 16 16:10:54.039342 containerd[1526]: 2025-05-16 16:10:53.996 [INFO][3828] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:10:54.039342 containerd[1526]: 2025-05-16 16:10:54.001 [INFO][3828] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:10:54.039342 containerd[1526]: 2025-05-16 16:10:54.003 [INFO][3828] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:10:54.039342 containerd[1526]: 2025-05-16 16:10:54.005 [INFO][3828] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 16:10:54.039342 containerd[1526]: 2025-05-16 16:10:54.005 [INFO][3828] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" host="localhost" May 16 16:10:54.039544 containerd[1526]: 2025-05-16 16:10:54.006 [INFO][3828] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610 May 16 16:10:54.039544 containerd[1526]: 2025-05-16 16:10:54.010 [INFO][3828] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" host="localhost" May 16 16:10:54.039544 containerd[1526]: 2025-05-16 16:10:54.015 [INFO][3828] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" host="localhost" May 16 16:10:54.039544 containerd[1526]: 2025-05-16 16:10:54.015 [INFO][3828] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" host="localhost" May 16 16:10:54.039544 containerd[1526]: 2025-05-16 16:10:54.015 [INFO][3828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 16:10:54.039544 containerd[1526]: 2025-05-16 16:10:54.015 [INFO][3828] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" HandleID="k8s-pod-network.4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" Workload="localhost-k8s-whisker--744466594d--5kjch-eth0" May 16 16:10:54.039651 containerd[1526]: 2025-05-16 16:10:54.017 [INFO][3814] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" Namespace="calico-system" Pod="whisker-744466594d-5kjch" WorkloadEndpoint="localhost-k8s-whisker--744466594d--5kjch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--744466594d--5kjch-eth0", GenerateName:"whisker-744466594d-", Namespace:"calico-system", SelfLink:"", UID:"72e24a37-c2d8-434b-8faf-2f5d0bed4d82", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"744466594d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-744466594d-5kjch", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid7f1a21f947", MAC:"",
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:10:54.039651 containerd[1526]: 2025-05-16 16:10:54.017 [INFO][3814] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" Namespace="calico-system" Pod="whisker-744466594d-5kjch" WorkloadEndpoint="localhost-k8s-whisker--744466594d--5kjch-eth0" May 16 16:10:54.039712 containerd[1526]: 2025-05-16 16:10:54.017 [INFO][3814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid7f1a21f947 ContainerID="4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" Namespace="calico-system" Pod="whisker-744466594d-5kjch" WorkloadEndpoint="localhost-k8s-whisker--744466594d--5kjch-eth0" May 16 16:10:54.039712 containerd[1526]: 2025-05-16 16:10:54.026 [INFO][3814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" Namespace="calico-system" Pod="whisker-744466594d-5kjch" WorkloadEndpoint="localhost-k8s-whisker--744466594d--5kjch-eth0" May 16 16:10:54.039746 containerd[1526]: 2025-05-16 16:10:54.026 [INFO][3814] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" Namespace="calico-system" Pod="whisker-744466594d-5kjch" WorkloadEndpoint="localhost-k8s-whisker--744466594d--5kjch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--744466594d--5kjch-eth0", GenerateName:"whisker-744466594d-", Namespace:"calico-system", SelfLink:"", UID:"72e24a37-c2d8-434b-8faf-2f5d0bed4d82", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 53, 0, time.Local), DeletionTimestamp:<nil>,
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"744466594d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610", Pod:"whisker-744466594d-5kjch", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid7f1a21f947", MAC:"aa:c3:0c:95:c3:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:10:54.039788 containerd[1526]: 2025-05-16 16:10:54.037 [INFO][3814] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" Namespace="calico-system" Pod="whisker-744466594d-5kjch" WorkloadEndpoint="localhost-k8s-whisker--744466594d--5kjch-eth0" May 16 16:10:54.089395 containerd[1526]: time="2025-05-16T16:10:54.089353212Z" level=info msg="connecting to shim 4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610" address="unix:///run/containerd/s/c5a567ba76f195ef20a5f704669dcc31b008365091cd555f740b2296f4d52dd6" namespace=k8s.io protocol=ttrpc version=3 May 16 16:10:54.120030 systemd[1]: Started cri-containerd-4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610.scope - libcontainer container 4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610. 
May 16 16:10:54.147044 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:10:54.169097 containerd[1526]: time="2025-05-16T16:10:54.169049106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-744466594d-5kjch,Uid:72e24a37-c2d8-434b-8faf-2f5d0bed4d82,Namespace:calico-system,Attempt:0,} returns sandbox id \"4d88cfafda3f3c2eedffe1dc9bdc08bbb45223f6b8afcd543306334a3baac610\"" May 16 16:10:54.172252 containerd[1526]: time="2025-05-16T16:10:54.172195748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 16:10:54.360897 containerd[1526]: time="2025-05-16T16:10:54.360799675Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 16:10:54.361767 containerd[1526]: time="2025-05-16T16:10:54.361721396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 16:10:54.361837 containerd[1526]: time="2025-05-16T16:10:54.361740876Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 16:10:54.362060 kubelet[2672]: E0516 16:10:54.362012 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to 
authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 16:10:54.366323 kubelet[2672]: E0516 16:10:54.366260 2672 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 16:10:54.377423 kubelet[2672]: E0516 16:10:54.377337 2672 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:88499e8d6d8d40fe9d65d26d7f0a4b23,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bcjv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,P
rocMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-744466594d-5kjch_calico-system(72e24a37-c2d8-434b-8faf-2f5d0bed4d82): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 16:10:54.379666 containerd[1526]: time="2025-05-16T16:10:54.379622888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 16:10:54.463855 containerd[1526]: time="2025-05-16T16:10:54.463112544Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45cc5af06812392e400e673e49dceb8e0e24b2eb5465ca86cafb91c8ca01e30a\" id:\"bf927cb362640b19291f435526f6de893b3f0be7ad62ab36fb2ff80082097032\" pid:3900 exit_status:1 exited_at:{seconds:1747411854 nanos:462784384}" May 16 16:10:54.548924 containerd[1526]: time="2025-05-16T16:10:54.548856842Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 16:10:54.549952 containerd[1526]: time="2025-05-16T16:10:54.549825203Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 16:10:54.549952 containerd[1526]: time="2025-05-16T16:10:54.549851883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 16:10:54.550138 kubelet[2672]: E0516 16:10:54.550098 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 16:10:54.550201 kubelet[2672]: E0516 16:10:54.550147 2672 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 16:10:54.550364 kubelet[2672]: E0516 16:10:54.550285 2672 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bcjv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-744466594d-5kjch_calico-system(72e24a37-c2d8-434b-8faf-2f5d0bed4d82): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 16:10:54.551645 kubelet[2672]: E0516 16:10:54.551578 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-744466594d-5kjch" podUID="72e24a37-c2d8-434b-8faf-2f5d0bed4d82" May 16 16:10:54.647570 systemd-networkd[1446]: vxlan.calico: Link UP May 16 16:10:54.647577 systemd-networkd[1446]: vxlan.calico: Gained carrier May 16 16:10:55.001074 kubelet[2672]: I0516 16:10:55.000982 2672 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e16189b3-0b4c-4b52-a2ac-64fc0606eab1" path="/var/lib/kubelet/pods/e16189b3-0b4c-4b52-a2ac-64fc0606eab1/volumes" May 16 16:10:55.138992 kubelet[2672]: E0516 16:10:55.138923 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off 
pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-744466594d-5kjch" podUID="72e24a37-c2d8-434b-8faf-2f5d0bed4d82" May 16 16:10:55.223958 containerd[1526]: time="2025-05-16T16:10:55.223904368Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45cc5af06812392e400e673e49dceb8e0e24b2eb5465ca86cafb91c8ca01e30a\" id:\"24324b4b1eb8f8072cf1ed92a1b25d49891692194b2f8563989f7291235b2d2d\" pid:4121 exit_status:1 exited_at:{seconds:1747411855 nanos:223589768}" May 16 16:10:55.739020 systemd-networkd[1446]: calid7f1a21f947: Gained IPv6LL May 16 16:10:55.931009 systemd-networkd[1446]: vxlan.calico: Gained IPv6LL May 16 16:10:56.141302 kubelet[2672]: E0516 16:10:56.141055 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-744466594d-5kjch" podUID="72e24a37-c2d8-434b-8faf-2f5d0bed4d82" May 16 16:10:58.999335 containerd[1526]: time="2025-05-16T16:10:58.999292529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b5548bff-6l5gn,Uid:efa9987a-af66-4b09-af3c-4a5eb93dc6dc,Namespace:calico-apiserver,Attempt:0,}" May 16 16:10:59.228585 systemd-networkd[1446]: cali0d687cf5a83: Link UP May 16 16:10:59.229182 systemd-networkd[1446]: cali0d687cf5a83: Gained carrier May 16 16:10:59.247287 containerd[1526]: 2025-05-16 16:10:59.122 [INFO][4144] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--58b5548bff--6l5gn-eth0 calico-apiserver-58b5548bff- calico-apiserver efa9987a-af66-4b09-af3c-4a5eb93dc6dc 811 0 2025-05-16 16:10:35 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58b5548bff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-58b5548bff-6l5gn eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0d687cf5a83 [] [] }} ContainerID="14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-6l5gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--6l5gn-" May 16 16:10:59.247287 containerd[1526]: 2025-05-16 16:10:59.122 [INFO][4144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-6l5gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--6l5gn-eth0" May 16 16:10:59.247287 containerd[1526]: 2025-05-16 16:10:59.182 [INFO][4159] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" HandleID="k8s-pod-network.14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" Workload="localhost-k8s-calico--apiserver--58b5548bff--6l5gn-eth0" May 16 16:10:59.247466 containerd[1526]: 2025-05-16 16:10:59.182 [INFO][4159] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" HandleID="k8s-pod-network.14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" Workload="localhost-k8s-calico--apiserver--58b5548bff--6l5gn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001224e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-58b5548bff-6l5gn", "timestamp":"2025-05-16 16:10:59.182351418 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 16:10:59.247466 containerd[1526]: 2025-05-16 16:10:59.182 [INFO][4159] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. May 16 16:10:59.247466 containerd[1526]: 2025-05-16 16:10:59.182 [INFO][4159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 16:10:59.247466 containerd[1526]: 2025-05-16 16:10:59.182 [INFO][4159] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:10:59.247466 containerd[1526]: 2025-05-16 16:10:59.192 [INFO][4159] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" host="localhost" May 16 16:10:59.247466 containerd[1526]: 2025-05-16 16:10:59.206 [INFO][4159] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:10:59.247466 containerd[1526]: 2025-05-16 16:10:59.211 [INFO][4159] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:10:59.247466 containerd[1526]: 2025-05-16 16:10:59.212 [INFO][4159] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:10:59.247466 containerd[1526]: 2025-05-16 16:10:59.214 [INFO][4159] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 16:10:59.247466 containerd[1526]: 2025-05-16 16:10:59.214 [INFO][4159] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" host="localhost" May 16 16:10:59.247692 containerd[1526]: 2025-05-16 16:10:59.216 [INFO][4159] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8 May 16 16:10:59.247692 containerd[1526]: 2025-05-16 16:10:59.219 [INFO][4159] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" host="localhost" May 16 16:10:59.247692 containerd[1526]: 2025-05-16 
16:10:59.224 [INFO][4159] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" host="localhost" May 16 16:10:59.247692 containerd[1526]: 2025-05-16 16:10:59.224 [INFO][4159] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" host="localhost" May 16 16:10:59.247692 containerd[1526]: 2025-05-16 16:10:59.224 [INFO][4159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:10:59.247692 containerd[1526]: 2025-05-16 16:10:59.224 [INFO][4159] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" HandleID="k8s-pod-network.14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" Workload="localhost-k8s-calico--apiserver--58b5548bff--6l5gn-eth0" May 16 16:10:59.247816 containerd[1526]: 2025-05-16 16:10:59.227 [INFO][4144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-6l5gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--6l5gn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58b5548bff--6l5gn-eth0", GenerateName:"calico-apiserver-58b5548bff-", Namespace:"calico-apiserver", SelfLink:"", UID:"efa9987a-af66-4b09-af3c-4a5eb93dc6dc", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"58b5548bff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-58b5548bff-6l5gn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d687cf5a83", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:10:59.247863 containerd[1526]: 2025-05-16 16:10:59.227 [INFO][4144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-6l5gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--6l5gn-eth0" May 16 16:10:59.247863 containerd[1526]: 2025-05-16 16:10:59.227 [INFO][4144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d687cf5a83 ContainerID="14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-6l5gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--6l5gn-eth0" May 16 16:10:59.247863 containerd[1526]: 2025-05-16 16:10:59.229 [INFO][4144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-6l5gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--6l5gn-eth0" May 16 16:10:59.247962 
containerd[1526]: 2025-05-16 16:10:59.229 [INFO][4144] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-6l5gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--6l5gn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58b5548bff--6l5gn-eth0", GenerateName:"calico-apiserver-58b5548bff-", Namespace:"calico-apiserver", SelfLink:"", UID:"efa9987a-af66-4b09-af3c-4a5eb93dc6dc", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58b5548bff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8", Pod:"calico-apiserver-58b5548bff-6l5gn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0d687cf5a83", MAC:"12:96:4a:be:ea:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:10:59.248013 containerd[1526]: 2025-05-16 16:10:59.239 
[INFO][4144] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-6l5gn" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--6l5gn-eth0" May 16 16:10:59.267860 containerd[1526]: time="2025-05-16T16:10:59.267342780Z" level=info msg="connecting to shim 14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8" address="unix:///run/containerd/s/879f3369db8763b08a164b7390a308fc9480a669ab7433f06fa6b1682c474bf0" namespace=k8s.io protocol=ttrpc version=3 May 16 16:10:59.299034 systemd[1]: Started cri-containerd-14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8.scope - libcontainer container 14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8. May 16 16:10:59.316717 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:10:59.361305 containerd[1526]: time="2025-05-16T16:10:59.361253426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b5548bff-6l5gn,Uid:efa9987a-af66-4b09-af3c-4a5eb93dc6dc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8\"" May 16 16:10:59.362755 containerd[1526]: time="2025-05-16T16:10:59.362727626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 16 16:10:59.530266 systemd[1]: Started sshd@9-10.0.0.50:22-10.0.0.1:41486.service - OpenSSH per-connection server daemon (10.0.0.1:41486). May 16 16:10:59.592109 sshd[4229]: Accepted publickey for core from 10.0.0.1 port 41486 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:10:59.593736 sshd-session[4229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:10:59.597955 systemd-logind[1508]: New session 10 of user core. 
May 16 16:10:59.611152 systemd[1]: Started session-10.scope - Session 10 of User core. May 16 16:10:59.766392 sshd[4231]: Connection closed by 10.0.0.1 port 41486 May 16 16:10:59.766695 sshd-session[4229]: pam_unix(sshd:session): session closed for user core May 16 16:10:59.770022 systemd[1]: sshd@9-10.0.0.50:22-10.0.0.1:41486.service: Deactivated successfully. May 16 16:10:59.772092 systemd[1]: session-10.scope: Deactivated successfully. May 16 16:10:59.773287 systemd-logind[1508]: Session 10 logged out. Waiting for processes to exit. May 16 16:10:59.774234 systemd-logind[1508]: Removed session 10. May 16 16:11:00.411024 systemd-networkd[1446]: cali0d687cf5a83: Gained IPv6LL May 16 16:11:01.005240 kubelet[2672]: E0516 16:11:01.005189 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:11:01.007242 containerd[1526]: time="2025-05-16T16:11:01.007002039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cf998847-g6kxt,Uid:895b517c-3cb8-4dbd-b16c-6cd9530117dd,Namespace:calico-system,Attempt:0,}" May 16 16:11:01.008609 containerd[1526]: time="2025-05-16T16:11:01.007148519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5b84t,Uid:5e264e0c-96bd-4ee4-af75-118440e86fe2,Namespace:kube-system,Attempt:0,}" May 16 16:11:01.148543 systemd-networkd[1446]: calic4fb829381a: Link UP May 16 16:11:01.149296 systemd-networkd[1446]: calic4fb829381a: Gained carrier May 16 16:11:01.164930 containerd[1526]: 2025-05-16 16:11:01.053 [INFO][4260] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--5b84t-eth0 coredns-668d6bf9bc- kube-system 5e264e0c-96bd-4ee4-af75-118440e86fe2 814 0 2025-05-16 16:10:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-5b84t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic4fb829381a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b84t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b84t-" May 16 16:11:01.164930 containerd[1526]: 2025-05-16 16:11:01.053 [INFO][4260] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b84t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b84t-eth0" May 16 16:11:01.164930 containerd[1526]: 2025-05-16 16:11:01.103 [INFO][4281] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" HandleID="k8s-pod-network.92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" Workload="localhost-k8s-coredns--668d6bf9bc--5b84t-eth0" May 16 16:11:01.165341 containerd[1526]: 2025-05-16 16:11:01.103 [INFO][4281] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" HandleID="k8s-pod-network.92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" Workload="localhost-k8s-coredns--668d6bf9bc--5b84t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034f880), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-5b84t", "timestamp":"2025-05-16 16:11:01.103041 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 
16:11:01.165341 containerd[1526]: 2025-05-16 16:11:01.103 [INFO][4281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 16:11:01.165341 containerd[1526]: 2025-05-16 16:11:01.103 [INFO][4281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 16:11:01.165341 containerd[1526]: 2025-05-16 16:11:01.103 [INFO][4281] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:11:01.165341 containerd[1526]: 2025-05-16 16:11:01.114 [INFO][4281] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" host="localhost" May 16 16:11:01.165341 containerd[1526]: 2025-05-16 16:11:01.119 [INFO][4281] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:11:01.165341 containerd[1526]: 2025-05-16 16:11:01.124 [INFO][4281] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:11:01.165341 containerd[1526]: 2025-05-16 16:11:01.126 [INFO][4281] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:11:01.165341 containerd[1526]: 2025-05-16 16:11:01.128 [INFO][4281] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 16:11:01.165341 containerd[1526]: 2025-05-16 16:11:01.129 [INFO][4281] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" host="localhost" May 16 16:11:01.166096 containerd[1526]: 2025-05-16 16:11:01.130 [INFO][4281] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7 May 16 16:11:01.166096 containerd[1526]: 2025-05-16 16:11:01.134 [INFO][4281] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" host="localhost" May 16 16:11:01.166096 containerd[1526]: 2025-05-16 16:11:01.140 [INFO][4281] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" host="localhost" May 16 16:11:01.166096 containerd[1526]: 2025-05-16 16:11:01.140 [INFO][4281] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" host="localhost" May 16 16:11:01.166096 containerd[1526]: 2025-05-16 16:11:01.140 [INFO][4281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:11:01.166096 containerd[1526]: 2025-05-16 16:11:01.140 [INFO][4281] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" HandleID="k8s-pod-network.92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" Workload="localhost-k8s-coredns--668d6bf9bc--5b84t-eth0" May 16 16:11:01.166217 containerd[1526]: 2025-05-16 16:11:01.144 [INFO][4260] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b84t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b84t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5b84t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5e264e0c-96bd-4ee4-af75-118440e86fe2", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-5b84t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4fb829381a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:11:01.166287 containerd[1526]: 2025-05-16 16:11:01.144 [INFO][4260] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b84t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b84t-eth0" May 16 16:11:01.166287 containerd[1526]: 2025-05-16 16:11:01.145 [INFO][4260] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic4fb829381a ContainerID="92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b84t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b84t-eth0" May 16 
16:11:01.166287 containerd[1526]: 2025-05-16 16:11:01.150 [INFO][4260] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b84t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b84t-eth0" May 16 16:11:01.166354 containerd[1526]: 2025-05-16 16:11:01.151 [INFO][4260] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b84t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b84t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--5b84t-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5e264e0c-96bd-4ee4-af75-118440e86fe2", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7", Pod:"coredns-668d6bf9bc-5b84t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic4fb829381a", MAC:"ae:44:b7:a5:c0:7d", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:11:01.166354 containerd[1526]: 2025-05-16 16:11:01.162 [INFO][4260] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" Namespace="kube-system" Pod="coredns-668d6bf9bc-5b84t" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--5b84t-eth0" May 16 16:11:01.209032 containerd[1526]: time="2025-05-16T16:11:01.208986806Z" level=info msg="connecting to shim 92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7" address="unix:///run/containerd/s/e27eccbfc162815fb6eda6c863383e7058c11ed7a07a791573d9b665f3d1d46f" namespace=k8s.io protocol=ttrpc version=3 May 16 16:11:01.249118 systemd[1]: Started cri-containerd-92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7.scope - libcontainer container 92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7. 
May 16 16:11:01.250754 systemd-networkd[1446]: cali255fa01a1d7: Link UP May 16 16:11:01.251506 systemd-networkd[1446]: cali255fa01a1d7: Gained carrier May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.082 [INFO][4255] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--85cf998847--g6kxt-eth0 calico-kube-controllers-85cf998847- calico-system 895b517c-3cb8-4dbd-b16c-6cd9530117dd 812 0 2025-05-16 16:10:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85cf998847 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-85cf998847-g6kxt eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali255fa01a1d7 [] [] }} ContainerID="39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" Namespace="calico-system" Pod="calico-kube-controllers-85cf998847-g6kxt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cf998847--g6kxt-" May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.082 [INFO][4255] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" Namespace="calico-system" Pod="calico-kube-controllers-85cf998847-g6kxt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cf998847--g6kxt-eth0" May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.121 [INFO][4290] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" HandleID="k8s-pod-network.39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" Workload="localhost-k8s-calico--kube--controllers--85cf998847--g6kxt-eth0" May 16 16:11:01.268126 containerd[1526]: 
2025-05-16 16:11:01.122 [INFO][4290] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" HandleID="k8s-pod-network.39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" Workload="localhost-k8s-calico--kube--controllers--85cf998847--g6kxt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d770), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-85cf998847-g6kxt", "timestamp":"2025-05-16 16:11:01.121948088 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.122 [INFO][4290] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.140 [INFO][4290] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.140 [INFO][4290] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.213 [INFO][4290] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" host="localhost" May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.220 [INFO][4290] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.225 [INFO][4290] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.226 [INFO][4290] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.229 [INFO][4290] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.229 [INFO][4290] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" host="localhost" May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.230 [INFO][4290] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1 May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.235 [INFO][4290] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" host="localhost" May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.243 [INFO][4290] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" host="localhost" May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.243 [INFO][4290] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" host="localhost" May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.243 [INFO][4290] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:11:01.268126 containerd[1526]: 2025-05-16 16:11:01.243 [INFO][4290] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" HandleID="k8s-pod-network.39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" Workload="localhost-k8s-calico--kube--controllers--85cf998847--g6kxt-eth0" May 16 16:11:01.268679 containerd[1526]: 2025-05-16 16:11:01.246 [INFO][4255] cni-plugin/k8s.go 418: Populated endpoint ContainerID="39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" Namespace="calico-system" Pod="calico-kube-controllers-85cf998847-g6kxt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cf998847--g6kxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85cf998847--g6kxt-eth0", GenerateName:"calico-kube-controllers-85cf998847-", Namespace:"calico-system", SelfLink:"", UID:"895b517c-3cb8-4dbd-b16c-6cd9530117dd", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cf998847", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-85cf998847-g6kxt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali255fa01a1d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:11:01.268679 containerd[1526]: 2025-05-16 16:11:01.246 [INFO][4255] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" Namespace="calico-system" Pod="calico-kube-controllers-85cf998847-g6kxt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cf998847--g6kxt-eth0" May 16 16:11:01.268679 containerd[1526]: 2025-05-16 16:11:01.246 [INFO][4255] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali255fa01a1d7 ContainerID="39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" Namespace="calico-system" Pod="calico-kube-controllers-85cf998847-g6kxt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cf998847--g6kxt-eth0" May 16 16:11:01.268679 containerd[1526]: 2025-05-16 16:11:01.250 [INFO][4255] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" Namespace="calico-system" Pod="calico-kube-controllers-85cf998847-g6kxt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cf998847--g6kxt-eth0" May 16 16:11:01.268679 containerd[1526]: 
2025-05-16 16:11:01.251 [INFO][4255] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" Namespace="calico-system" Pod="calico-kube-controllers-85cf998847-g6kxt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cf998847--g6kxt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--85cf998847--g6kxt-eth0", GenerateName:"calico-kube-controllers-85cf998847-", Namespace:"calico-system", SelfLink:"", UID:"895b517c-3cb8-4dbd-b16c-6cd9530117dd", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85cf998847", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1", Pod:"calico-kube-controllers-85cf998847-g6kxt", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali255fa01a1d7", MAC:"e6:4b:aa:fd:a3:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:11:01.268679 containerd[1526]: 
2025-05-16 16:11:01.262 [INFO][4255] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" Namespace="calico-system" Pod="calico-kube-controllers-85cf998847-g6kxt" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--85cf998847--g6kxt-eth0" May 16 16:11:01.271043 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:11:01.292136 containerd[1526]: time="2025-05-16T16:11:01.292094281Z" level=info msg="connecting to shim 39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1" address="unix:///run/containerd/s/e231d195ebfdbc1a4c275804557f7196ceffc7895e85f58f19d2ac15c55b34d1" namespace=k8s.io protocol=ttrpc version=3 May 16 16:11:01.317090 systemd[1]: Started cri-containerd-39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1.scope - libcontainer container 39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1. 
May 16 16:11:01.321187 containerd[1526]: time="2025-05-16T16:11:01.321149374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5b84t,Uid:5e264e0c-96bd-4ee4-af75-118440e86fe2,Namespace:kube-system,Attempt:0,} returns sandbox id \"92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7\"" May 16 16:11:01.322453 kubelet[2672]: E0516 16:11:01.322422 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:11:01.331933 containerd[1526]: time="2025-05-16T16:11:01.331865018Z" level=info msg="CreateContainer within sandbox \"92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 16:11:01.341856 containerd[1526]: time="2025-05-16T16:11:01.341813743Z" level=info msg="Container 92a7f1c8ab6e2822995e4d05ace3184b6e45229afd6dab859c3f02f821b70309: CDI devices from CRI Config.CDIDevices: []" May 16 16:11:01.343051 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:11:01.351631 containerd[1526]: time="2025-05-16T16:11:01.351497347Z" level=info msg="CreateContainer within sandbox \"92d08d71cc0b68b2508e4bc8dd93259d4978baa7d2f4a52784f30f1aa55e80a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"92a7f1c8ab6e2822995e4d05ace3184b6e45229afd6dab859c3f02f821b70309\"" May 16 16:11:01.354913 containerd[1526]: time="2025-05-16T16:11:01.354843988Z" level=info msg="StartContainer for \"92a7f1c8ab6e2822995e4d05ace3184b6e45229afd6dab859c3f02f821b70309\"" May 16 16:11:01.356597 containerd[1526]: time="2025-05-16T16:11:01.356564589Z" level=info msg="connecting to shim 92a7f1c8ab6e2822995e4d05ace3184b6e45229afd6dab859c3f02f821b70309" address="unix:///run/containerd/s/e27eccbfc162815fb6eda6c863383e7058c11ed7a07a791573d9b665f3d1d46f" protocol=ttrpc version=3 
May 16 16:11:01.370186 containerd[1526]: time="2025-05-16T16:11:01.370111395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85cf998847-g6kxt,Uid:895b517c-3cb8-4dbd-b16c-6cd9530117dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1\"" May 16 16:11:01.388085 systemd[1]: Started cri-containerd-92a7f1c8ab6e2822995e4d05ace3184b6e45229afd6dab859c3f02f821b70309.scope - libcontainer container 92a7f1c8ab6e2822995e4d05ace3184b6e45229afd6dab859c3f02f821b70309. May 16 16:11:01.426738 containerd[1526]: time="2025-05-16T16:11:01.426698859Z" level=info msg="StartContainer for \"92a7f1c8ab6e2822995e4d05ace3184b6e45229afd6dab859c3f02f821b70309\" returns successfully" May 16 16:11:01.546085 containerd[1526]: time="2025-05-16T16:11:01.545973390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=44453213" May 16 16:11:01.551052 containerd[1526]: time="2025-05-16T16:11:01.550986553Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 2.188224327s" May 16 16:11:01.551217 containerd[1526]: time="2025-05-16T16:11:01.551177713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 16 16:11:01.552692 containerd[1526]: time="2025-05-16T16:11:01.552648113Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 16 16:11:01.553139 containerd[1526]: time="2025-05-16T16:11:01.553022833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:11:01.553659 containerd[1526]: time="2025-05-16T16:11:01.553627314Z" level=info msg="ImageCreate event name:\"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:11:01.554743 containerd[1526]: time="2025-05-16T16:11:01.554654754Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:11:01.554985 containerd[1526]: time="2025-05-16T16:11:01.554921754Z" level=info msg="CreateContainer within sandbox \"14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 16:11:01.564906 containerd[1526]: time="2025-05-16T16:11:01.563843358Z" level=info msg="Container 96031e4e63ae8bfce2629a99d4eaa21a886f63409ae2bce1dfd2f7c2d6ea9fa9: CDI devices from CRI Config.CDIDevices: []" May 16 16:11:01.570988 containerd[1526]: time="2025-05-16T16:11:01.570948921Z" level=info msg="CreateContainer within sandbox \"14bdf4c4329dab2c0101f0886a2652839f87aee464b984784e159467da716dd8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"96031e4e63ae8bfce2629a99d4eaa21a886f63409ae2bce1dfd2f7c2d6ea9fa9\"" May 16 16:11:01.571545 containerd[1526]: time="2025-05-16T16:11:01.571522241Z" level=info msg="StartContainer for \"96031e4e63ae8bfce2629a99d4eaa21a886f63409ae2bce1dfd2f7c2d6ea9fa9\"" May 16 16:11:01.573251 containerd[1526]: time="2025-05-16T16:11:01.573221122Z" level=info msg="connecting to shim 96031e4e63ae8bfce2629a99d4eaa21a886f63409ae2bce1dfd2f7c2d6ea9fa9" address="unix:///run/containerd/s/879f3369db8763b08a164b7390a308fc9480a669ab7433f06fa6b1682c474bf0" protocol=ttrpc version=3 May 16 16:11:01.593085 systemd[1]: Started 
cri-containerd-96031e4e63ae8bfce2629a99d4eaa21a886f63409ae2bce1dfd2f7c2d6ea9fa9.scope - libcontainer container 96031e4e63ae8bfce2629a99d4eaa21a886f63409ae2bce1dfd2f7c2d6ea9fa9. May 16 16:11:01.668334 containerd[1526]: time="2025-05-16T16:11:01.668296763Z" level=info msg="StartContainer for \"96031e4e63ae8bfce2629a99d4eaa21a886f63409ae2bce1dfd2f7c2d6ea9fa9\" returns successfully" May 16 16:11:01.999841 containerd[1526]: time="2025-05-16T16:11:01.999794865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b5548bff-kp9g5,Uid:f8a0d9c3-5243-4963-a185-76df8ad5a59c,Namespace:calico-apiserver,Attempt:0,}" May 16 16:11:02.000239 containerd[1526]: time="2025-05-16T16:11:02.000060465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlzmx,Uid:d9789950-a309-4163-96c5-d67e446c252b,Namespace:calico-system,Attempt:0,}" May 16 16:11:02.000482 containerd[1526]: time="2025-05-16T16:11:02.000431666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-5hgdf,Uid:052dd569-b80c-4bbb-b6f6-acc75ce14539,Namespace:calico-system,Attempt:0,}" May 16 16:11:02.152173 systemd-networkd[1446]: cali70f2cb0edd1: Link UP May 16 16:11:02.152371 systemd-networkd[1446]: cali70f2cb0edd1: Gained carrier May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.056 [INFO][4492] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--vlzmx-eth0 csi-node-driver- calico-system d9789950-a309-4163-96c5-d67e446c252b 673 0 2025-05-16 16:10:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-vlzmx eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] cali70f2cb0edd1 [] [] }} ContainerID="27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" Namespace="calico-system" Pod="csi-node-driver-vlzmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlzmx-" May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.056 [INFO][4492] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" Namespace="calico-system" Pod="csi-node-driver-vlzmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlzmx-eth0" May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.099 [INFO][4527] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" HandleID="k8s-pod-network.27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" Workload="localhost-k8s-csi--node--driver--vlzmx-eth0" May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.099 [INFO][4527] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" HandleID="k8s-pod-network.27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" Workload="localhost-k8s-csi--node--driver--vlzmx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e1620), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-vlzmx", "timestamp":"2025-05-16 16:11:02.099465945 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.099 [INFO][4527] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.099 [INFO][4527] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.100 [INFO][4527] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.113 [INFO][4527] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" host="localhost" May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.120 [INFO][4527] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.127 [INFO][4527] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.128 [INFO][4527] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.131 [INFO][4527] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.131 [INFO][4527] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" host="localhost" May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.132 [INFO][4527] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.136 [INFO][4527] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" host="localhost" May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.141 [INFO][4527] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" host="localhost" May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.141 [INFO][4527] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" host="localhost" May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.141 [INFO][4527] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:11:02.174076 containerd[1526]: 2025-05-16 16:11:02.141 [INFO][4527] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" HandleID="k8s-pod-network.27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" Workload="localhost-k8s-csi--node--driver--vlzmx-eth0" May 16 16:11:02.174841 containerd[1526]: 2025-05-16 16:11:02.147 [INFO][4492] cni-plugin/k8s.go 418: Populated endpoint ContainerID="27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" Namespace="calico-system" Pod="csi-node-driver-vlzmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlzmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vlzmx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9789950-a309-4163-96c5-d67e446c252b", ResourceVersion:"673", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-vlzmx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70f2cb0edd1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:11:02.174841 containerd[1526]: 2025-05-16 16:11:02.147 [INFO][4492] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" Namespace="calico-system" Pod="csi-node-driver-vlzmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlzmx-eth0" May 16 16:11:02.174841 containerd[1526]: 2025-05-16 16:11:02.147 [INFO][4492] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70f2cb0edd1 ContainerID="27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" Namespace="calico-system" Pod="csi-node-driver-vlzmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlzmx-eth0" May 16 16:11:02.174841 containerd[1526]: 2025-05-16 16:11:02.153 [INFO][4492] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" Namespace="calico-system" Pod="csi-node-driver-vlzmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlzmx-eth0" May 16 16:11:02.174841 containerd[1526]: 2025-05-16 16:11:02.153 [INFO][4492] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" Namespace="calico-system" Pod="csi-node-driver-vlzmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlzmx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--vlzmx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9789950-a309-4163-96c5-d67e446c252b", ResourceVersion:"673", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e", Pod:"csi-node-driver-vlzmx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali70f2cb0edd1", MAC:"96:ce:46:9c:39:50", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:11:02.174841 containerd[1526]: 2025-05-16 16:11:02.169 [INFO][4492] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" 
Namespace="calico-system" Pod="csi-node-driver-vlzmx" WorkloadEndpoint="localhost-k8s-csi--node--driver--vlzmx-eth0" May 16 16:11:02.232472 kubelet[2672]: E0516 16:11:02.232430 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:11:02.243237 containerd[1526]: time="2025-05-16T16:11:02.242981803Z" level=info msg="connecting to shim 27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e" address="unix:///run/containerd/s/51d5599a03943c19b5610461e29fcfce14d8052d9a0b6fc5aa46372cefe00f19" namespace=k8s.io protocol=ttrpc version=3 May 16 16:11:02.257703 kubelet[2672]: I0516 16:11:02.256329 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58b5548bff-6l5gn" podStartSLOduration=25.066151882 podStartE2EDuration="27.256311649s" podCreationTimestamp="2025-05-16 16:10:35 +0000 UTC" firstStartedPulling="2025-05-16 16:10:59.362343906 +0000 UTC m=+40.444240304" lastFinishedPulling="2025-05-16 16:11:01.552503713 +0000 UTC m=+42.634400071" observedRunningTime="2025-05-16 16:11:02.252549127 +0000 UTC m=+43.334445525" watchObservedRunningTime="2025-05-16 16:11:02.256311649 +0000 UTC m=+43.338208047" May 16 16:11:02.267150 systemd-networkd[1446]: cali255fa01a1d7: Gained IPv6LL May 16 16:11:02.269793 kubelet[2672]: I0516 16:11:02.269726 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5b84t" podStartSLOduration=37.269706814 podStartE2EDuration="37.269706814s" podCreationTimestamp="2025-05-16 16:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:11:02.266967453 +0000 UTC m=+43.348863851" watchObservedRunningTime="2025-05-16 16:11:02.269706814 +0000 UTC m=+43.351603212" May 16 16:11:02.287252 systemd-networkd[1446]: 
cali3bedb9d50d3: Link UP May 16 16:11:02.287926 systemd-networkd[1446]: cali3bedb9d50d3: Gained carrier May 16 16:11:02.292013 systemd[1]: Started cri-containerd-27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e.scope - libcontainer container 27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e. May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.062 [INFO][4484] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--58b5548bff--kp9g5-eth0 calico-apiserver-58b5548bff- calico-apiserver f8a0d9c3-5243-4963-a185-76df8ad5a59c 816 0 2025-05-16 16:10:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58b5548bff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-58b5548bff-kp9g5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3bedb9d50d3 [] [] }} ContainerID="c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-kp9g5" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--kp9g5-" May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.062 [INFO][4484] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-kp9g5" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--kp9g5-eth0" May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.105 [INFO][4529] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" 
HandleID="k8s-pod-network.c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" Workload="localhost-k8s-calico--apiserver--58b5548bff--kp9g5-eth0" May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.105 [INFO][4529] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" HandleID="k8s-pod-network.c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" Workload="localhost-k8s-calico--apiserver--58b5548bff--kp9g5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000320720), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-58b5548bff-kp9g5", "timestamp":"2025-05-16 16:11:02.105171108 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.105 [INFO][4529] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.141 [INFO][4529] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.141 [INFO][4529] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.215 [INFO][4529] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" host="localhost" May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.223 [INFO][4529] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.234 [INFO][4529] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.238 [INFO][4529] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.244 [INFO][4529] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.244 [INFO][4529] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" host="localhost" May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.250 [INFO][4529] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426 May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.262 [INFO][4529] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" host="localhost" May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.277 [INFO][4529] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" host="localhost" May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.277 [INFO][4529] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" host="localhost" May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.277 [INFO][4529] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:11:02.310331 containerd[1526]: 2025-05-16 16:11:02.277 [INFO][4529] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" HandleID="k8s-pod-network.c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" Workload="localhost-k8s-calico--apiserver--58b5548bff--kp9g5-eth0" May 16 16:11:02.311462 containerd[1526]: 2025-05-16 16:11:02.285 [INFO][4484] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-kp9g5" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--kp9g5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58b5548bff--kp9g5-eth0", GenerateName:"calico-apiserver-58b5548bff-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8a0d9c3-5243-4963-a185-76df8ad5a59c", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58b5548bff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-58b5548bff-kp9g5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3bedb9d50d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:11:02.311462 containerd[1526]: 2025-05-16 16:11:02.285 [INFO][4484] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-kp9g5" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--kp9g5-eth0" May 16 16:11:02.311462 containerd[1526]: 2025-05-16 16:11:02.285 [INFO][4484] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3bedb9d50d3 ContainerID="c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-kp9g5" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--kp9g5-eth0" May 16 16:11:02.311462 containerd[1526]: 2025-05-16 16:11:02.288 [INFO][4484] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-kp9g5" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--kp9g5-eth0" May 16 16:11:02.311462 containerd[1526]: 2025-05-16 16:11:02.288 [INFO][4484] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-kp9g5" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--kp9g5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--58b5548bff--kp9g5-eth0", GenerateName:"calico-apiserver-58b5548bff-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8a0d9c3-5243-4963-a185-76df8ad5a59c", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58b5548bff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426", Pod:"calico-apiserver-58b5548bff-kp9g5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3bedb9d50d3", MAC:"e2:15:2c:c6:fc:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:11:02.311462 containerd[1526]: 2025-05-16 16:11:02.302 [INFO][4484] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" Namespace="calico-apiserver" Pod="calico-apiserver-58b5548bff-kp9g5" WorkloadEndpoint="localhost-k8s-calico--apiserver--58b5548bff--kp9g5-eth0" May 16 16:11:02.315920 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:11:02.341023 containerd[1526]: time="2025-05-16T16:11:02.340723443Z" level=info msg="connecting to shim c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426" address="unix:///run/containerd/s/e2f91bf568b033dc5890e18c98fdaacdf88386ca17a3a1e0284c036f9a26235b" namespace=k8s.io protocol=ttrpc version=3 May 16 16:11:02.370034 systemd[1]: Started cri-containerd-c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426.scope - libcontainer container c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426. May 16 16:11:02.374085 systemd-networkd[1446]: cali04c2ee9bacc: Link UP May 16 16:11:02.374592 systemd-networkd[1446]: cali04c2ee9bacc: Gained carrier May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.078 [INFO][4504] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--78d55f7ddc--5hgdf-eth0 goldmane-78d55f7ddc- calico-system 052dd569-b80c-4bbb-b6f6-acc75ce14539 813 0 2025-05-16 16:10:39 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-78d55f7ddc-5hgdf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali04c2ee9bacc [] [] }} ContainerID="c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5hgdf" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5hgdf-" May 16 16:11:02.393228 containerd[1526]: 2025-05-16 
16:11:02.078 [INFO][4504] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5hgdf" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5hgdf-eth0" May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.124 [INFO][4542] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" HandleID="k8s-pod-network.c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" Workload="localhost-k8s-goldmane--78d55f7ddc--5hgdf-eth0" May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.124 [INFO][4542] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" HandleID="k8s-pod-network.c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" Workload="localhost-k8s-goldmane--78d55f7ddc--5hgdf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001365b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-78d55f7ddc-5hgdf", "timestamp":"2025-05-16 16:11:02.123823075 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.124 [INFO][4542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.277 [INFO][4542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.277 [INFO][4542] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.315 [INFO][4542] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" host="localhost" May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.323 [INFO][4542] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.338 [INFO][4542] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.341 [INFO][4542] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.345 [INFO][4542] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.345 [INFO][4542] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" host="localhost" May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.348 [INFO][4542] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5 May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.355 [INFO][4542] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" host="localhost" May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.364 [INFO][4542] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" host="localhost" May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.364 [INFO][4542] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" host="localhost" May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.364 [INFO][4542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:11:02.393228 containerd[1526]: 2025-05-16 16:11:02.364 [INFO][4542] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" HandleID="k8s-pod-network.c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" Workload="localhost-k8s-goldmane--78d55f7ddc--5hgdf-eth0" May 16 16:11:02.394286 containerd[1526]: 2025-05-16 16:11:02.369 [INFO][4504] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5hgdf" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5hgdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--5hgdf-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"052dd569-b80c-4bbb-b6f6-acc75ce14539", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-78d55f7ddc-5hgdf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali04c2ee9bacc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:11:02.394286 containerd[1526]: 2025-05-16 16:11:02.369 [INFO][4504] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5hgdf" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5hgdf-eth0" May 16 16:11:02.394286 containerd[1526]: 2025-05-16 16:11:02.370 [INFO][4504] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04c2ee9bacc ContainerID="c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5hgdf" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5hgdf-eth0" May 16 16:11:02.394286 containerd[1526]: 2025-05-16 16:11:02.375 [INFO][4504] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5hgdf" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5hgdf-eth0" May 16 16:11:02.394286 containerd[1526]: 2025-05-16 16:11:02.377 [INFO][4504] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5hgdf" 
WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5hgdf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--5hgdf-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"052dd569-b80c-4bbb-b6f6-acc75ce14539", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5", Pod:"goldmane-78d55f7ddc-5hgdf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali04c2ee9bacc", MAC:"16:86:21:05:f4:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:11:02.394286 containerd[1526]: 2025-05-16 16:11:02.390 [INFO][4504] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" Namespace="calico-system" Pod="goldmane-78d55f7ddc-5hgdf" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--5hgdf-eth0" May 16 16:11:02.394439 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: 
No such device or address May 16 16:11:02.398852 containerd[1526]: time="2025-05-16T16:11:02.398816386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-vlzmx,Uid:d9789950-a309-4163-96c5-d67e446c252b,Namespace:calico-system,Attempt:0,} returns sandbox id \"27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e\"" May 16 16:11:02.431043 containerd[1526]: time="2025-05-16T16:11:02.430999039Z" level=info msg="connecting to shim c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5" address="unix:///run/containerd/s/973a0ee0d29907fab65d8d97a943844ba4cbb4c02a85bd1dc5dcda02f9fc14b0" namespace=k8s.io protocol=ttrpc version=3 May 16 16:11:02.455304 containerd[1526]: time="2025-05-16T16:11:02.455266609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58b5548bff-kp9g5,Uid:f8a0d9c3-5243-4963-a185-76df8ad5a59c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426\"" May 16 16:11:02.458691 containerd[1526]: time="2025-05-16T16:11:02.458659930Z" level=info msg="CreateContainer within sandbox \"c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 16:11:02.460052 systemd[1]: Started cri-containerd-c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5.scope - libcontainer container c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5. 
May 16 16:11:02.466075 containerd[1526]: time="2025-05-16T16:11:02.466043853Z" level=info msg="Container 40acbf8937522af3f3d7887393938c22246afb0db015d686eb431ad0dbfa3334: CDI devices from CRI Config.CDIDevices: []" May 16 16:11:02.474411 containerd[1526]: time="2025-05-16T16:11:02.474378376Z" level=info msg="CreateContainer within sandbox \"c71fb6614f84e65a6656f148d08390fe42d64ca467fde11db0795e349680c426\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"40acbf8937522af3f3d7887393938c22246afb0db015d686eb431ad0dbfa3334\"" May 16 16:11:02.478576 containerd[1526]: time="2025-05-16T16:11:02.478523818Z" level=info msg="StartContainer for \"40acbf8937522af3f3d7887393938c22246afb0db015d686eb431ad0dbfa3334\"" May 16 16:11:02.479286 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:11:02.483696 containerd[1526]: time="2025-05-16T16:11:02.483653900Z" level=info msg="connecting to shim 40acbf8937522af3f3d7887393938c22246afb0db015d686eb431ad0dbfa3334" address="unix:///run/containerd/s/e2f91bf568b033dc5890e18c98fdaacdf88386ca17a3a1e0284c036f9a26235b" protocol=ttrpc version=3 May 16 16:11:02.507097 systemd[1]: Started cri-containerd-40acbf8937522af3f3d7887393938c22246afb0db015d686eb431ad0dbfa3334.scope - libcontainer container 40acbf8937522af3f3d7887393938c22246afb0db015d686eb431ad0dbfa3334. 
May 16 16:11:02.509939 containerd[1526]: time="2025-05-16T16:11:02.509832271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-5hgdf,Uid:052dd569-b80c-4bbb-b6f6-acc75ce14539,Namespace:calico-system,Attempt:0,} returns sandbox id \"c0d2fefa53abf70ea9b2d5a70d1be393e5c17ff26b1ea18c576824d1aaaeaef5\"" May 16 16:11:02.545066 containerd[1526]: time="2025-05-16T16:11:02.545030925Z" level=info msg="StartContainer for \"40acbf8937522af3f3d7887393938c22246afb0db015d686eb431ad0dbfa3334\" returns successfully" May 16 16:11:02.651046 systemd-networkd[1446]: calic4fb829381a: Gained IPv6LL May 16 16:11:02.999149 kubelet[2672]: E0516 16:11:02.998989 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:11:03.004149 containerd[1526]: time="2025-05-16T16:11:03.002137109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4g676,Uid:9b3f85d7-7d18-49d6-8a32-85309c91c6cf,Namespace:kube-system,Attempt:0,}" May 16 16:11:03.159803 systemd-networkd[1446]: calia6bc7a3b58a: Link UP May 16 16:11:03.160147 systemd-networkd[1446]: calia6bc7a3b58a: Gained carrier May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.063 [INFO][4761] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--4g676-eth0 coredns-668d6bf9bc- kube-system 9b3f85d7-7d18-49d6-8a32-85309c91c6cf 810 0 2025-05-16 16:10:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-4g676 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia6bc7a3b58a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4g676" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4g676-" May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.063 [INFO][4761] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4g676" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4g676-eth0" May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.094 [INFO][4775] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" HandleID="k8s-pod-network.c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" Workload="localhost-k8s-coredns--668d6bf9bc--4g676-eth0" May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.094 [INFO][4775] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" HandleID="k8s-pod-network.c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" Workload="localhost-k8s-coredns--668d6bf9bc--4g676-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003214c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-4g676", "timestamp":"2025-05-16 16:11:03.094656944 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.095 [INFO][4775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.095 [INFO][4775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.095 [INFO][4775] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.108 [INFO][4775] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" host="localhost" May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.116 [INFO][4775] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.124 [INFO][4775] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.128 [INFO][4775] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.131 [INFO][4775] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.131 [INFO][4775] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" host="localhost" May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.134 [INFO][4775] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3 May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.140 [INFO][4775] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" host="localhost" May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.149 [INFO][4775] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" host="localhost" May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.150 [INFO][4775] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" host="localhost" May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.150 [INFO][4775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 16:11:03.183103 containerd[1526]: 2025-05-16 16:11:03.150 [INFO][4775] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" HandleID="k8s-pod-network.c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" Workload="localhost-k8s-coredns--668d6bf9bc--4g676-eth0" May 16 16:11:03.184452 containerd[1526]: 2025-05-16 16:11:03.156 [INFO][4761] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4g676" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4g676-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4g676-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9b3f85d7-7d18-49d6-8a32-85309c91c6cf", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-4g676", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6bc7a3b58a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:11:03.184452 containerd[1526]: 2025-05-16 16:11:03.157 [INFO][4761] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4g676" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4g676-eth0" May 16 16:11:03.184452 containerd[1526]: 2025-05-16 16:11:03.157 [INFO][4761] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6bc7a3b58a ContainerID="c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4g676" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4g676-eth0" May 16 16:11:03.184452 containerd[1526]: 2025-05-16 16:11:03.160 [INFO][4761] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4g676" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4g676-eth0" May 16 16:11:03.184452 containerd[1526]: 2025-05-16 16:11:03.166 [INFO][4761] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4g676" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4g676-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--4g676-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9b3f85d7-7d18-49d6-8a32-85309c91c6cf", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 16, 10, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3", Pod:"coredns-668d6bf9bc-4g676", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6bc7a3b58a", MAC:"ea:48:11:0d:99:82", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 16:11:03.184452 containerd[1526]: 2025-05-16 16:11:03.178 [INFO][4761] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" Namespace="kube-system" Pod="coredns-668d6bf9bc-4g676" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--4g676-eth0" May 16 16:11:03.219172 containerd[1526]: time="2025-05-16T16:11:03.219130871Z" level=info msg="connecting to shim c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3" address="unix:///run/containerd/s/5cf303c9042ecc58af86a809dbea67134d11160e87a9273cc18920e66e2a7add" namespace=k8s.io protocol=ttrpc version=3 May 16 16:11:03.239974 kubelet[2672]: E0516 16:11:03.239939 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:11:03.248492 kubelet[2672]: I0516 16:11:03.248445 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 16:11:03.257622 kubelet[2672]: I0516 16:11:03.257499 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-58b5548bff-kp9g5" podStartSLOduration=29.257480645 podStartE2EDuration="29.257480645s" podCreationTimestamp="2025-05-16 16:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:11:03.254470204 +0000 UTC m=+44.336366602" watchObservedRunningTime="2025-05-16 16:11:03.257480645 +0000 UTC m=+44.339377043" May 16 16:11:03.274122 systemd[1]: Started 
cri-containerd-c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3.scope - libcontainer container c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3. May 16 16:11:03.300217 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 16:11:03.328402 containerd[1526]: time="2025-05-16T16:11:03.328249072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4g676,Uid:9b3f85d7-7d18-49d6-8a32-85309c91c6cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3\"" May 16 16:11:03.329760 kubelet[2672]: E0516 16:11:03.329733 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:11:03.334642 containerd[1526]: time="2025-05-16T16:11:03.334348114Z" level=info msg="CreateContainer within sandbox \"c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 16:11:03.351454 containerd[1526]: time="2025-05-16T16:11:03.351406961Z" level=info msg="Container fe7b3941cab288cb5b90f76f3bc1e8b0cf61417108575f60d766bde21a8b00bc: CDI devices from CRI Config.CDIDevices: []" May 16 16:11:03.362577 containerd[1526]: time="2025-05-16T16:11:03.362540525Z" level=info msg="CreateContainer within sandbox \"c48797ad73911e140693136cb94ab25bb8fd2253299761a68bcf878b22ecdbb3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fe7b3941cab288cb5b90f76f3bc1e8b0cf61417108575f60d766bde21a8b00bc\"" May 16 16:11:03.363583 containerd[1526]: time="2025-05-16T16:11:03.363547485Z" level=info msg="StartContainer for \"fe7b3941cab288cb5b90f76f3bc1e8b0cf61417108575f60d766bde21a8b00bc\"" May 16 16:11:03.365282 containerd[1526]: time="2025-05-16T16:11:03.365242206Z" level=info msg="connecting to shim 
fe7b3941cab288cb5b90f76f3bc1e8b0cf61417108575f60d766bde21a8b00bc" address="unix:///run/containerd/s/5cf303c9042ecc58af86a809dbea67134d11160e87a9273cc18920e66e2a7add" protocol=ttrpc version=3 May 16 16:11:03.402354 systemd[1]: Started cri-containerd-fe7b3941cab288cb5b90f76f3bc1e8b0cf61417108575f60d766bde21a8b00bc.scope - libcontainer container fe7b3941cab288cb5b90f76f3bc1e8b0cf61417108575f60d766bde21a8b00bc. May 16 16:11:03.418990 systemd-networkd[1446]: cali70f2cb0edd1: Gained IPv6LL May 16 16:11:03.459756 containerd[1526]: time="2025-05-16T16:11:03.459707362Z" level=info msg="StartContainer for \"fe7b3941cab288cb5b90f76f3bc1e8b0cf61417108575f60d766bde21a8b00bc\" returns successfully" May 16 16:11:03.548027 systemd-networkd[1446]: cali04c2ee9bacc: Gained IPv6LL May 16 16:11:03.906736 containerd[1526]: time="2025-05-16T16:11:03.906691250Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:11:03.907663 containerd[1526]: time="2025-05-16T16:11:03.907636411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=48045219" May 16 16:11:03.908553 containerd[1526]: time="2025-05-16T16:11:03.908528251Z" level=info msg="ImageCreate event name:\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:11:03.910517 containerd[1526]: time="2025-05-16T16:11:03.910488732Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:11:03.911263 containerd[1526]: time="2025-05-16T16:11:03.911229972Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id 
\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"49414428\" in 2.358556099s" May 16 16:11:03.911313 containerd[1526]: time="2025-05-16T16:11:03.911271692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 16 16:11:03.913060 containerd[1526]: time="2025-05-16T16:11:03.913032613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 16 16:11:03.924448 containerd[1526]: time="2025-05-16T16:11:03.924400577Z" level=info msg="CreateContainer within sandbox \"39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 16 16:11:03.930266 containerd[1526]: time="2025-05-16T16:11:03.930216859Z" level=info msg="Container 0f33ee279aabff9d112860270c8e3d4eb1cc41dd54925f04aef96451e4fdbfb2: CDI devices from CRI Config.CDIDevices: []" May 16 16:11:03.937576 containerd[1526]: time="2025-05-16T16:11:03.937532782Z" level=info msg="CreateContainer within sandbox \"39bdd3e521a33972f65e0e7fd38fbc91cdea50cf5679604bd23936f0439ca5c1\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0f33ee279aabff9d112860270c8e3d4eb1cc41dd54925f04aef96451e4fdbfb2\"" May 16 16:11:03.938105 containerd[1526]: time="2025-05-16T16:11:03.938078782Z" level=info msg="StartContainer for \"0f33ee279aabff9d112860270c8e3d4eb1cc41dd54925f04aef96451e4fdbfb2\"" May 16 16:11:03.940533 containerd[1526]: time="2025-05-16T16:11:03.940498863Z" level=info msg="connecting to shim 0f33ee279aabff9d112860270c8e3d4eb1cc41dd54925f04aef96451e4fdbfb2" 
address="unix:///run/containerd/s/e231d195ebfdbc1a4c275804557f7196ceffc7895e85f58f19d2ac15c55b34d1" protocol=ttrpc version=3 May 16 16:11:03.968057 systemd[1]: Started cri-containerd-0f33ee279aabff9d112860270c8e3d4eb1cc41dd54925f04aef96451e4fdbfb2.scope - libcontainer container 0f33ee279aabff9d112860270c8e3d4eb1cc41dd54925f04aef96451e4fdbfb2. May 16 16:11:03.995031 systemd-networkd[1446]: cali3bedb9d50d3: Gained IPv6LL May 16 16:11:04.021581 containerd[1526]: time="2025-05-16T16:11:04.021492013Z" level=info msg="StartContainer for \"0f33ee279aabff9d112860270c8e3d4eb1cc41dd54925f04aef96451e4fdbfb2\" returns successfully" May 16 16:11:04.251189 kubelet[2672]: I0516 16:11:04.251075 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 16:11:04.251189 kubelet[2672]: E0516 16:11:04.251164 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:11:04.255031 kubelet[2672]: E0516 16:11:04.254940 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:11:04.262608 kubelet[2672]: I0516 16:11:04.262411 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4g676" podStartSLOduration=39.262394498 podStartE2EDuration="39.262394498s" podCreationTimestamp="2025-05-16 16:10:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 16:11:04.262376138 +0000 UTC m=+45.344272536" watchObservedRunningTime="2025-05-16 16:11:04.262394498 +0000 UTC m=+45.344290896" May 16 16:11:04.294676 kubelet[2672]: I0516 16:11:04.294614 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85cf998847-g6kxt" 
podStartSLOduration=23.754542174 podStartE2EDuration="26.29459707s" podCreationTimestamp="2025-05-16 16:10:38 +0000 UTC" firstStartedPulling="2025-05-16 16:11:01.372080316 +0000 UTC m=+42.453976674" lastFinishedPulling="2025-05-16 16:11:03.912135172 +0000 UTC m=+44.994031570" observedRunningTime="2025-05-16 16:11:04.293589989 +0000 UTC m=+45.375486387" watchObservedRunningTime="2025-05-16 16:11:04.29459707 +0000 UTC m=+45.376493428" May 16 16:11:04.781904 systemd[1]: Started sshd@10-10.0.0.50:22-10.0.0.1:54034.service - OpenSSH per-connection server daemon (10.0.0.1:54034). May 16 16:11:04.852907 sshd[4926]: Accepted publickey for core from 10.0.0.1 port 54034 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:11:04.854098 sshd-session[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:11:04.860611 systemd-logind[1508]: New session 11 of user core. May 16 16:11:04.877100 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 16:11:05.019143 systemd-networkd[1446]: calia6bc7a3b58a: Gained IPv6LL May 16 16:11:05.065661 sshd[4934]: Connection closed by 10.0.0.1 port 54034 May 16 16:11:05.066149 sshd-session[4926]: pam_unix(sshd:session): session closed for user core May 16 16:11:05.069621 systemd[1]: sshd@10-10.0.0.50:22-10.0.0.1:54034.service: Deactivated successfully. May 16 16:11:05.071493 systemd[1]: session-11.scope: Deactivated successfully. May 16 16:11:05.072330 systemd-logind[1508]: Session 11 logged out. Waiting for processes to exit. May 16 16:11:05.074145 systemd-logind[1508]: Removed session 11. 
May 16 16:11:05.254100 kubelet[2672]: I0516 16:11:05.253967 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 16:11:05.262125 kubelet[2672]: E0516 16:11:05.262081 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 16:11:05.305028 containerd[1526]: time="2025-05-16T16:11:05.304983341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:11:05.306793 containerd[1526]: time="2025-05-16T16:11:05.306758861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8226240" May 16 16:11:05.306868 containerd[1526]: time="2025-05-16T16:11:05.306840341Z" level=info msg="ImageCreate event name:\"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:11:05.308574 containerd[1526]: time="2025-05-16T16:11:05.308539982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:11:05.309795 containerd[1526]: time="2025-05-16T16:11:05.309675902Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"9595481\" in 1.396608009s" May 16 16:11:05.309795 containerd[1526]: time="2025-05-16T16:11:05.309707182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\"" 
May 16 16:11:05.311091 containerd[1526]: time="2025-05-16T16:11:05.311067263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 16:11:05.312707 containerd[1526]: time="2025-05-16T16:11:05.312666743Z" level=info msg="CreateContainer within sandbox \"27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 16 16:11:05.323767 containerd[1526]: time="2025-05-16T16:11:05.322687866Z" level=info msg="Container f109938ffb90b0e11ffb2e6dcb3742e79f503071b25a1d1d928817ce01f54b7e: CDI devices from CRI Config.CDIDevices: []" May 16 16:11:05.336100 containerd[1526]: time="2025-05-16T16:11:05.336065271Z" level=info msg="CreateContainer within sandbox \"27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f109938ffb90b0e11ffb2e6dcb3742e79f503071b25a1d1d928817ce01f54b7e\"" May 16 16:11:05.338108 containerd[1526]: time="2025-05-16T16:11:05.338058912Z" level=info msg="StartContainer for \"f109938ffb90b0e11ffb2e6dcb3742e79f503071b25a1d1d928817ce01f54b7e\"" May 16 16:11:05.339690 containerd[1526]: time="2025-05-16T16:11:05.339664432Z" level=info msg="connecting to shim f109938ffb90b0e11ffb2e6dcb3742e79f503071b25a1d1d928817ce01f54b7e" address="unix:///run/containerd/s/51d5599a03943c19b5610461e29fcfce14d8052d9a0b6fc5aa46372cefe00f19" protocol=ttrpc version=3 May 16 16:11:05.371166 systemd[1]: Started cri-containerd-f109938ffb90b0e11ffb2e6dcb3742e79f503071b25a1d1d928817ce01f54b7e.scope - libcontainer container f109938ffb90b0e11ffb2e6dcb3742e79f503071b25a1d1d928817ce01f54b7e. 
May 16 16:11:05.419641 containerd[1526]: time="2025-05-16T16:11:05.419588379Z" level=info msg="StartContainer for \"f109938ffb90b0e11ffb2e6dcb3742e79f503071b25a1d1d928817ce01f54b7e\" returns successfully" May 16 16:11:05.441427 containerd[1526]: time="2025-05-16T16:11:05.441361546Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 16:11:05.442400 containerd[1526]: time="2025-05-16T16:11:05.442281346Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 16:11:05.442400 containerd[1526]: time="2025-05-16T16:11:05.442323186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 16:11:05.442575 kubelet[2672]: E0516 16:11:05.442496 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 16:11:05.442575 kubelet[2672]: E0516 16:11:05.442552 2672 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 16:11:05.443521 containerd[1526]: time="2025-05-16T16:11:05.443396507Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 16 16:11:05.443592 kubelet[2672]: E0516 16:11:05.443353 2672 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8r7lt,ReadOnly:true,MountPath:/var/run/secrets/kube
rnetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-5hgdf_calico-system(052dd569-b80c-4bbb-b6f6-acc75ce14539): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 16:11:05.444988 kubelet[2672]: E0516 16:11:05.444938 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5hgdf" podUID="052dd569-b80c-4bbb-b6f6-acc75ce14539" May 16 16:11:06.258729 kubelet[2672]: E0516 16:11:06.258654 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5hgdf" podUID="052dd569-b80c-4bbb-b6f6-acc75ce14539" May 16 16:11:06.548814 containerd[1526]: time="2025-05-16T16:11:06.548707542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:11:06.549705 containerd[1526]: time="2025-05-16T16:11:06.549673462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=13749925" May 16 16:11:06.553271 containerd[1526]: time="2025-05-16T16:11:06.553218663Z" level=info msg="ImageCreate event name:\"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:11:06.555939 containerd[1526]: time="2025-05-16T16:11:06.555906584Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 16:11:06.556603 containerd[1526]: time="2025-05-16T16:11:06.556402104Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"15119118\" in 1.112967437s" May 16 16:11:06.556603 containerd[1526]: time="2025-05-16T16:11:06.556517944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\"" May 16 16:11:06.559045 containerd[1526]: time="2025-05-16T16:11:06.558999745Z" level=info msg="CreateContainer within sandbox \"27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 16 16:11:06.565417 containerd[1526]: time="2025-05-16T16:11:06.565375187Z" level=info msg="Container e776d8db9ea54b389cfcaecfdb4a0ff8bce16a167da9c79b7543a1f8696f8843: CDI devices from CRI Config.CDIDevices: []" May 16 16:11:06.588161 containerd[1526]: time="2025-05-16T16:11:06.588109434Z" level=info msg="CreateContainer within sandbox \"27f68f6d780fe06b60c47f404df2491f240f8d31b2bf38bf82b69c4813640c9e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e776d8db9ea54b389cfcaecfdb4a0ff8bce16a167da9c79b7543a1f8696f8843\"" May 16 16:11:06.589005 containerd[1526]: time="2025-05-16T16:11:06.588973394Z" level=info msg="StartContainer for \"e776d8db9ea54b389cfcaecfdb4a0ff8bce16a167da9c79b7543a1f8696f8843\"" May 16 16:11:06.590543 
containerd[1526]: time="2025-05-16T16:11:06.590504115Z" level=info msg="connecting to shim e776d8db9ea54b389cfcaecfdb4a0ff8bce16a167da9c79b7543a1f8696f8843" address="unix:///run/containerd/s/51d5599a03943c19b5610461e29fcfce14d8052d9a0b6fc5aa46372cefe00f19" protocol=ttrpc version=3 May 16 16:11:06.622092 systemd[1]: Started cri-containerd-e776d8db9ea54b389cfcaecfdb4a0ff8bce16a167da9c79b7543a1f8696f8843.scope - libcontainer container e776d8db9ea54b389cfcaecfdb4a0ff8bce16a167da9c79b7543a1f8696f8843. May 16 16:11:06.683403 containerd[1526]: time="2025-05-16T16:11:06.683359664Z" level=info msg="StartContainer for \"e776d8db9ea54b389cfcaecfdb4a0ff8bce16a167da9c79b7543a1f8696f8843\" returns successfully" May 16 16:11:06.999475 containerd[1526]: time="2025-05-16T16:11:06.999442482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 16:11:07.080820 kubelet[2672]: I0516 16:11:07.080779 2672 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 16 16:11:07.080820 kubelet[2672]: I0516 16:11:07.080819 2672 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 16 16:11:07.181762 containerd[1526]: time="2025-05-16T16:11:07.181567015Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 16:11:07.182660 containerd[1526]: time="2025-05-16T16:11:07.182488855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: 
failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 16:11:07.182660 containerd[1526]: time="2025-05-16T16:11:07.182556655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 16:11:07.182752 kubelet[2672]: E0516 16:11:07.182699 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 16:11:07.182752 kubelet[2672]: E0516 16:11:07.182744 2672 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 16:11:07.183068 kubelet[2672]: E0516 16:11:07.182875 2672 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:88499e8d6d8d40fe9d65d26d7f0a4b23,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bcjv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-744466594d-5kjch_calico-system(72e24a37-c2d8-434b-8faf-2f5d0bed4d82): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 16:11:07.185739 containerd[1526]: 
time="2025-05-16T16:11:07.185529216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 16:11:07.278798 kubelet[2672]: I0516 16:11:07.278369 2672 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-vlzmx" podStartSLOduration=25.122118645 podStartE2EDuration="29.278351763s" podCreationTimestamp="2025-05-16 16:10:38 +0000 UTC" firstStartedPulling="2025-05-16 16:11:02.401102427 +0000 UTC m=+43.482998785" lastFinishedPulling="2025-05-16 16:11:06.557335545 +0000 UTC m=+47.639231903" observedRunningTime="2025-05-16 16:11:07.278219323 +0000 UTC m=+48.360115721" watchObservedRunningTime="2025-05-16 16:11:07.278351763 +0000 UTC m=+48.360248161" May 16 16:11:07.385759 containerd[1526]: time="2025-05-16T16:11:07.385710675Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 16:11:07.391859 containerd[1526]: time="2025-05-16T16:11:07.391816956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 16:11:07.392094 containerd[1526]: time="2025-05-16T16:11:07.391891076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 16:11:07.392301 kubelet[2672]: E0516 16:11:07.392230 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 16:11:07.392385 kubelet[2672]: E0516 16:11:07.392306 2672 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 16:11:07.392790 kubelet[2672]: E0516 16:11:07.392428 2672 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bcjv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-744466594d-5kjch_calico-system(72e24a37-c2d8-434b-8faf-2f5d0bed4d82): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 16:11:07.393967 kubelet[2672]: E0516 16:11:07.393929 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-744466594d-5kjch" podUID="72e24a37-c2d8-434b-8faf-2f5d0bed4d82" May 16 16:11:09.165596 kubelet[2672]: I0516 16:11:09.165179 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 16:11:09.210134 containerd[1526]: time="2025-05-16T16:11:09.210100821Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f33ee279aabff9d112860270c8e3d4eb1cc41dd54925f04aef96451e4fdbfb2\" id:\"2d53c5c3d7a84feb84c45689bb292079a22c1c1fc9c31dc15880957655656fbd\" pid:5040 exited_at:{seconds:1747411869 nanos:209468781}" May 16 16:11:09.251945 containerd[1526]: time="2025-05-16T16:11:09.251862752Z" level=info msg="TaskExit event in 
podsandbox handler container_id:\"0f33ee279aabff9d112860270c8e3d4eb1cc41dd54925f04aef96451e4fdbfb2\" id:\"0b00919dbdc20d24e278a609150363849bb5adaca4549d231806c5e5f742550f\" pid:5066 exited_at:{seconds:1747411869 nanos:250338591}" May 16 16:11:10.081554 systemd[1]: Started sshd@11-10.0.0.50:22-10.0.0.1:54038.service - OpenSSH per-connection server daemon (10.0.0.1:54038). May 16 16:11:10.151322 sshd[5078]: Accepted publickey for core from 10.0.0.1 port 54038 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:11:10.152806 sshd-session[5078]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:11:10.156971 systemd-logind[1508]: New session 12 of user core. May 16 16:11:10.164061 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 16:11:10.346748 sshd[5080]: Connection closed by 10.0.0.1 port 54038 May 16 16:11:10.348070 sshd-session[5078]: pam_unix(sshd:session): session closed for user core May 16 16:11:10.356040 systemd[1]: sshd@11-10.0.0.50:22-10.0.0.1:54038.service: Deactivated successfully. May 16 16:11:10.357716 systemd[1]: session-12.scope: Deactivated successfully. May 16 16:11:10.358514 systemd-logind[1508]: Session 12 logged out. Waiting for processes to exit. May 16 16:11:10.361295 systemd[1]: Started sshd@12-10.0.0.50:22-10.0.0.1:54040.service - OpenSSH per-connection server daemon (10.0.0.1:54040). May 16 16:11:10.361905 systemd-logind[1508]: Removed session 12. May 16 16:11:10.413471 sshd[5094]: Accepted publickey for core from 10.0.0.1 port 54040 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:11:10.414711 sshd-session[5094]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:11:10.422169 systemd-logind[1508]: New session 13 of user core. May 16 16:11:10.434078 systemd[1]: Started session-13.scope - Session 13 of User core. 
May 16 16:11:10.618976 sshd[5096]: Connection closed by 10.0.0.1 port 54040 May 16 16:11:10.619009 sshd-session[5094]: pam_unix(sshd:session): session closed for user core May 16 16:11:10.631595 systemd[1]: sshd@12-10.0.0.50:22-10.0.0.1:54040.service: Deactivated successfully. May 16 16:11:10.634592 systemd[1]: session-13.scope: Deactivated successfully. May 16 16:11:10.636069 systemd-logind[1508]: Session 13 logged out. Waiting for processes to exit. May 16 16:11:10.638257 systemd[1]: Started sshd@13-10.0.0.50:22-10.0.0.1:54048.service - OpenSSH per-connection server daemon (10.0.0.1:54048). May 16 16:11:10.641636 systemd-logind[1508]: Removed session 13. May 16 16:11:10.685085 sshd[5107]: Accepted publickey for core from 10.0.0.1 port 54048 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:11:10.686368 sshd-session[5107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:11:10.690235 systemd-logind[1508]: New session 14 of user core. May 16 16:11:10.698025 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 16:11:10.883228 sshd[5109]: Connection closed by 10.0.0.1 port 54048 May 16 16:11:10.883136 sshd-session[5107]: pam_unix(sshd:session): session closed for user core May 16 16:11:10.887795 systemd[1]: sshd@13-10.0.0.50:22-10.0.0.1:54048.service: Deactivated successfully. May 16 16:11:10.890252 systemd[1]: session-14.scope: Deactivated successfully. May 16 16:11:10.891484 systemd-logind[1508]: Session 14 logged out. Waiting for processes to exit. May 16 16:11:10.893069 systemd-logind[1508]: Removed session 14. May 16 16:11:14.968658 kubelet[2672]: I0516 16:11:14.968551 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 16:11:15.898205 systemd[1]: Started sshd@14-10.0.0.50:22-10.0.0.1:52752.service - OpenSSH per-connection server daemon (10.0.0.1:52752). 
May 16 16:11:15.951479 sshd[5139]: Accepted publickey for core from 10.0.0.1 port 52752 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:11:15.952832 sshd-session[5139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:11:15.956883 systemd-logind[1508]: New session 15 of user core. May 16 16:11:15.967010 systemd[1]: Started session-15.scope - Session 15 of User core. May 16 16:11:16.143926 sshd[5141]: Connection closed by 10.0.0.1 port 52752 May 16 16:11:16.144205 sshd-session[5139]: pam_unix(sshd:session): session closed for user core May 16 16:11:16.147056 systemd[1]: sshd@14-10.0.0.50:22-10.0.0.1:52752.service: Deactivated successfully. May 16 16:11:16.149160 systemd[1]: session-15.scope: Deactivated successfully. May 16 16:11:16.153323 systemd-logind[1508]: Session 15 logged out. Waiting for processes to exit. May 16 16:11:16.154767 systemd-logind[1508]: Removed session 15. May 16 16:11:18.001956 containerd[1526]: time="2025-05-16T16:11:18.001912974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 16:11:18.198709 containerd[1526]: time="2025-05-16T16:11:18.198641482Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 16:11:18.199592 containerd[1526]: time="2025-05-16T16:11:18.199559762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 
16:11:18.199704 containerd[1526]: time="2025-05-16T16:11:18.199621642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 16:11:18.199780 kubelet[2672]: E0516 16:11:18.199746 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 16:11:18.200155 kubelet[2672]: E0516 16:11:18.199792 2672 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 16:11:18.200155 kubelet[2672]: E0516 16:11:18.199928 2672 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8r7lt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-5hgdf_calico-system(052dd569-b80c-4bbb-b6f6-acc75ce14539): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 16:11:18.201126 kubelet[2672]: E0516 16:11:18.201079 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5hgdf" podUID="052dd569-b80c-4bbb-b6f6-acc75ce14539" May 16 16:11:20.545566 containerd[1526]: 
time="2025-05-16T16:11:20.545520990Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f33ee279aabff9d112860270c8e3d4eb1cc41dd54925f04aef96451e4fdbfb2\" id:\"1a58dc623e146b5ade4baae8beba039d88f30eed382ff06e6555e708561bc450\" pid:5167 exited_at:{seconds:1747411880 nanos:544830983}" May 16 16:11:21.000556 kubelet[2672]: E0516 16:11:21.000501 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-744466594d-5kjch" podUID="72e24a37-c2d8-434b-8faf-2f5d0bed4d82" May 16 16:11:21.155369 systemd[1]: Started sshd@15-10.0.0.50:22-10.0.0.1:52754.service - OpenSSH per-connection server daemon (10.0.0.1:52754). 
May 16 16:11:21.198337 sshd[5178]: Accepted publickey for core from 10.0.0.1 port 52754 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:11:21.199620 sshd-session[5178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:11:21.204940 systemd-logind[1508]: New session 16 of user core. May 16 16:11:21.212140 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 16:11:21.357952 sshd[5180]: Connection closed by 10.0.0.1 port 52754 May 16 16:11:21.358418 sshd-session[5178]: pam_unix(sshd:session): session closed for user core May 16 16:11:21.362630 systemd[1]: sshd@15-10.0.0.50:22-10.0.0.1:52754.service: Deactivated successfully. May 16 16:11:21.364673 systemd[1]: session-16.scope: Deactivated successfully. May 16 16:11:21.365544 systemd-logind[1508]: Session 16 logged out. Waiting for processes to exit. May 16 16:11:21.366778 systemd-logind[1508]: Removed session 16. May 16 16:11:25.211094 containerd[1526]: time="2025-05-16T16:11:25.211048772Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45cc5af06812392e400e673e49dceb8e0e24b2eb5465ca86cafb91c8ca01e30a\" id:\"e9c02fd3cdcf90ccc6d94cf203e68eb5b39419ddbc5a3c65a37d3130aac8916f\" pid:5208 exited_at:{seconds:1747411885 nanos:210668089}" May 16 16:11:26.374797 systemd[1]: Started sshd@16-10.0.0.50:22-10.0.0.1:34788.service - OpenSSH per-connection server daemon (10.0.0.1:34788). May 16 16:11:26.437088 sshd[5224]: Accepted publickey for core from 10.0.0.1 port 34788 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:11:26.438573 sshd-session[5224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:11:26.442839 systemd-logind[1508]: New session 17 of user core. May 16 16:11:26.460110 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 16 16:11:26.634127 sshd[5226]: Connection closed by 10.0.0.1 port 34788 May 16 16:11:26.634671 sshd-session[5224]: pam_unix(sshd:session): session closed for user core May 16 16:11:26.647173 systemd[1]: sshd@16-10.0.0.50:22-10.0.0.1:34788.service: Deactivated successfully. May 16 16:11:26.648965 systemd[1]: session-17.scope: Deactivated successfully. May 16 16:11:26.650389 systemd-logind[1508]: Session 17 logged out. Waiting for processes to exit. May 16 16:11:26.652930 systemd[1]: Started sshd@17-10.0.0.50:22-10.0.0.1:34794.service - OpenSSH per-connection server daemon (10.0.0.1:34794). May 16 16:11:26.654445 systemd-logind[1508]: Removed session 17. May 16 16:11:26.701936 sshd[5240]: Accepted publickey for core from 10.0.0.1 port 34794 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:11:26.703029 sshd-session[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:11:26.707874 systemd-logind[1508]: New session 18 of user core. May 16 16:11:26.714046 systemd[1]: Started session-18.scope - Session 18 of User core. May 16 16:11:26.944122 sshd[5242]: Connection closed by 10.0.0.1 port 34794 May 16 16:11:26.944620 sshd-session[5240]: pam_unix(sshd:session): session closed for user core May 16 16:11:26.956530 systemd[1]: sshd@17-10.0.0.50:22-10.0.0.1:34794.service: Deactivated successfully. May 16 16:11:26.959405 systemd[1]: session-18.scope: Deactivated successfully. May 16 16:11:26.960185 systemd-logind[1508]: Session 18 logged out. Waiting for processes to exit. May 16 16:11:26.962902 systemd[1]: Started sshd@18-10.0.0.50:22-10.0.0.1:34806.service - OpenSSH per-connection server daemon (10.0.0.1:34806). May 16 16:11:26.964686 systemd-logind[1508]: Removed session 18. 
May 16 16:11:27.020580 sshd[5253]: Accepted publickey for core from 10.0.0.1 port 34806 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:11:27.022516 sshd-session[5253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:11:27.027935 systemd-logind[1508]: New session 19 of user core. May 16 16:11:27.040080 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 16:11:27.747975 sshd[5255]: Connection closed by 10.0.0.1 port 34806 May 16 16:11:27.749136 sshd-session[5253]: pam_unix(sshd:session): session closed for user core May 16 16:11:27.760551 systemd[1]: sshd@18-10.0.0.50:22-10.0.0.1:34806.service: Deactivated successfully. May 16 16:11:27.764412 systemd[1]: session-19.scope: Deactivated successfully. May 16 16:11:27.766629 systemd-logind[1508]: Session 19 logged out. Waiting for processes to exit. May 16 16:11:27.770672 systemd[1]: Started sshd@19-10.0.0.50:22-10.0.0.1:34810.service - OpenSSH per-connection server daemon (10.0.0.1:34810). May 16 16:11:27.772066 systemd-logind[1508]: Removed session 19. May 16 16:11:27.819596 sshd[5276]: Accepted publickey for core from 10.0.0.1 port 34810 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:11:27.821189 sshd-session[5276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:11:27.825596 systemd-logind[1508]: New session 20 of user core. May 16 16:11:27.841038 systemd[1]: Started session-20.scope - Session 20 of User core. May 16 16:11:28.106661 sshd[5278]: Connection closed by 10.0.0.1 port 34810 May 16 16:11:28.107011 sshd-session[5276]: pam_unix(sshd:session): session closed for user core May 16 16:11:28.116609 systemd[1]: sshd@19-10.0.0.50:22-10.0.0.1:34810.service: Deactivated successfully. May 16 16:11:28.118516 systemd[1]: session-20.scope: Deactivated successfully. May 16 16:11:28.119720 systemd-logind[1508]: Session 20 logged out. Waiting for processes to exit. 
May 16 16:11:28.124082 systemd[1]: Started sshd@20-10.0.0.50:22-10.0.0.1:34818.service - OpenSSH per-connection server daemon (10.0.0.1:34818). May 16 16:11:28.125492 systemd-logind[1508]: Removed session 20. May 16 16:11:28.177578 sshd[5290]: Accepted publickey for core from 10.0.0.1 port 34818 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:11:28.178898 sshd-session[5290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:11:28.184105 systemd-logind[1508]: New session 21 of user core. May 16 16:11:28.193072 systemd[1]: Started session-21.scope - Session 21 of User core. May 16 16:11:28.328217 sshd[5292]: Connection closed by 10.0.0.1 port 34818 May 16 16:11:28.328725 sshd-session[5290]: pam_unix(sshd:session): session closed for user core May 16 16:11:28.332206 systemd[1]: sshd@20-10.0.0.50:22-10.0.0.1:34818.service: Deactivated successfully. May 16 16:11:28.334153 systemd[1]: session-21.scope: Deactivated successfully. May 16 16:11:28.334854 systemd-logind[1508]: Session 21 logged out. Waiting for processes to exit. May 16 16:11:28.335863 systemd-logind[1508]: Removed session 21. 
May 16 16:11:31.000149 kubelet[2672]: E0516 16:11:31.000089 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5hgdf" podUID="052dd569-b80c-4bbb-b6f6-acc75ce14539" May 16 16:11:33.340251 systemd[1]: Started sshd@21-10.0.0.50:22-10.0.0.1:41118.service - OpenSSH per-connection server daemon (10.0.0.1:41118). May 16 16:11:33.391179 sshd[5312]: Accepted publickey for core from 10.0.0.1 port 41118 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI May 16 16:11:33.392350 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 16:11:33.396010 systemd-logind[1508]: New session 22 of user core. May 16 16:11:33.402055 systemd[1]: Started session-22.scope - Session 22 of User core. May 16 16:11:33.539056 sshd[5314]: Connection closed by 10.0.0.1 port 41118 May 16 16:11:33.539385 sshd-session[5312]: pam_unix(sshd:session): session closed for user core May 16 16:11:33.542743 systemd[1]: sshd@21-10.0.0.50:22-10.0.0.1:41118.service: Deactivated successfully. May 16 16:11:33.544528 systemd[1]: session-22.scope: Deactivated successfully. May 16 16:11:33.545184 systemd-logind[1508]: Session 22 logged out. Waiting for processes to exit. May 16 16:11:33.546215 systemd-logind[1508]: Removed session 22. 
May 16 16:11:34.999481 containerd[1526]: time="2025-05-16T16:11:34.999436980Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 16:11:35.145952 containerd[1526]: time="2025-05-16T16:11:35.145872086Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 16:11:35.146773 containerd[1526]: time="2025-05-16T16:11:35.146702812Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 16:11:35.146773 containerd[1526]: time="2025-05-16T16:11:35.146741572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 16:11:35.147218 kubelet[2672]: E0516 16:11:35.146945 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 16:11:35.147218 kubelet[2672]: E0516 16:11:35.146989 2672 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0"
May 16 16:11:35.147218 kubelet[2672]: E0516 16:11:35.147096 2672 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:88499e8d6d8d40fe9d65d26d7f0a4b23,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bcjv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-744466594d-5kjch_calico-system(72e24a37-c2d8-434b-8faf-2f5d0bed4d82): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 16 16:11:35.149930 containerd[1526]: time="2025-05-16T16:11:35.149862873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\""
May 16 16:11:35.301045 containerd[1526]: time="2025-05-16T16:11:35.300934771Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 16 16:11:35.302079 containerd[1526]: time="2025-05-16T16:11:35.302020418Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden"
May 16 16:11:35.302150 containerd[1526]: time="2025-05-16T16:11:35.302082859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86"
May 16 16:11:35.302306 kubelet[2672]: E0516 16:11:35.302266 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 16 16:11:35.302369 kubelet[2672]: E0516 16:11:35.302318 2672 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0"
May 16 16:11:35.302464 kubelet[2672]: E0516 16:11:35.302425 2672 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bcjv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-744466594d-5kjch_calico-system(72e24a37-c2d8-434b-8faf-2f5d0bed4d82): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 16 16:11:35.303828 kubelet[2672]: E0516 16:11:35.303784 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-744466594d-5kjch" podUID="72e24a37-c2d8-434b-8faf-2f5d0bed4d82"
May 16 16:11:38.481685 kubelet[2672]: I0516 16:11:38.481632 2672 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 16 16:11:38.557370 systemd[1]: Started sshd@22-10.0.0.50:22-10.0.0.1:41122.service - OpenSSH per-connection server daemon (10.0.0.1:41122).
May 16 16:11:38.602250 sshd[5335]: Accepted publickey for core from 10.0.0.1 port 41122 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:11:38.603369 sshd-session[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:11:38.607174 systemd-logind[1508]: New session 23 of user core.
May 16 16:11:38.615112 systemd[1]: Started session-23.scope - Session 23 of User core.
May 16 16:11:38.744041 sshd[5337]: Connection closed by 10.0.0.1 port 41122
May 16 16:11:38.744725 sshd-session[5335]: pam_unix(sshd:session): session closed for user core
May 16 16:11:38.748229 systemd[1]: sshd@22-10.0.0.50:22-10.0.0.1:41122.service: Deactivated successfully.
May 16 16:11:38.751410 systemd[1]: session-23.scope: Deactivated successfully.
May 16 16:11:38.752312 systemd-logind[1508]: Session 23 logged out. Waiting for processes to exit.
May 16 16:11:38.753444 systemd-logind[1508]: Removed session 23.
May 16 16:11:39.249801 containerd[1526]: time="2025-05-16T16:11:39.249764100Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0f33ee279aabff9d112860270c8e3d4eb1cc41dd54925f04aef96451e4fdbfb2\" id:\"e79723a2fe40b25788d8e945df84b27d10c0940bacd7b66b3c73374035bc1d9b\" pid:5361 exited_at:{seconds:1747411899 nanos:249492018}"
May 16 16:11:43.760150 systemd[1]: Started sshd@23-10.0.0.50:22-10.0.0.1:52108.service - OpenSSH per-connection server daemon (10.0.0.1:52108).
May 16 16:11:43.813264 sshd[5375]: Accepted publickey for core from 10.0.0.1 port 52108 ssh2: RSA SHA256:bkgptDe9jseKO6+aQtJKCzW+g5Mhm35Zmmkqt5qGuGI
May 16 16:11:43.814584 sshd-session[5375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 16:11:43.818944 systemd-logind[1508]: New session 24 of user core.
May 16 16:11:43.826021 systemd[1]: Started session-24.scope - Session 24 of User core.
May 16 16:11:43.998939 kubelet[2672]: E0516 16:11:43.998906 2672 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 16:11:44.017059 sshd[5377]: Connection closed by 10.0.0.1 port 52108
May 16 16:11:44.017551 sshd-session[5375]: pam_unix(sshd:session): session closed for user core
May 16 16:11:44.020956 systemd[1]: sshd@23-10.0.0.50:22-10.0.0.1:52108.service: Deactivated successfully.
May 16 16:11:44.022710 systemd[1]: session-24.scope: Deactivated successfully.
May 16 16:11:44.023432 systemd-logind[1508]: Session 24 logged out. Waiting for processes to exit.
May 16 16:11:44.024551 systemd-logind[1508]: Removed session 24.
May 16 16:11:45.002024 containerd[1526]: time="2025-05-16T16:11:45.000302537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\""
May 16 16:11:45.169349 containerd[1526]: time="2025-05-16T16:11:45.169182497Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io
May 16 16:11:45.170173 containerd[1526]: time="2025-05-16T16:11:45.170118182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden"
May 16 16:11:45.170281 containerd[1526]: time="2025-05-16T16:11:45.170185102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86"
May 16 16:11:45.170370 kubelet[2672]: E0516 16:11:45.170332 2672 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 16 16:11:45.170635 kubelet[2672]: E0516 16:11:45.170380 2672 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0"
May 16 16:11:45.170635 kubelet[2672]: E0516 16:11:45.170509 2672 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8r7lt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-5hgdf_calico-system(052dd569-b80c-4bbb-b6f6-acc75ce14539): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError"
May 16 16:11:45.172709 kubelet[2672]: E0516 16:11:45.171990 2672 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-5hgdf" podUID="052dd569-b80c-4bbb-b6f6-acc75ce14539"