May 16 04:59:45.804796 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 16 04:59:45.804815 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri May 16 03:56:41 -00 2025
May 16 04:59:45.804824 kernel: KASLR enabled
May 16 04:59:45.804830 kernel: efi: EFI v2.7 by EDK II
May 16 04:59:45.804835 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 16 04:59:45.804840 kernel: random: crng init done
May 16 04:59:45.804847 kernel: secureboot: Secure boot disabled
May 16 04:59:45.804852 kernel: ACPI: Early table checksum verification disabled
May 16 04:59:45.804858 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 16 04:59:45.804865 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 16 04:59:45.804871 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 16 04:59:45.804876 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 04:59:45.804882 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 16 04:59:45.804888 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 04:59:45.804895 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 04:59:45.804902 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 04:59:45.804908 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 04:59:45.804914 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 16 04:59:45.804920 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 04:59:45.804926 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 16 04:59:45.804932 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 16 04:59:45.804938 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 16 04:59:45.804944 kernel: NODE_DATA(0) allocated [mem 0xdc965dc0-0xdc96cfff]
May 16 04:59:45.804950 kernel: Zone ranges:
May 16 04:59:45.804956 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 16 04:59:45.804963 kernel: DMA32 empty
May 16 04:59:45.804969 kernel: Normal empty
May 16 04:59:45.804974 kernel: Device empty
May 16 04:59:45.804980 kernel: Movable zone start for each node
May 16 04:59:45.804986 kernel: Early memory node ranges
May 16 04:59:45.804992 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 16 04:59:45.804998 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 16 04:59:45.805004 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 16 04:59:45.805010 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 16 04:59:45.805016 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 16 04:59:45.805022 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 16 04:59:45.805027 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 16 04:59:45.805034 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 16 04:59:45.805040 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 16 04:59:45.805046 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 16 04:59:45.805055 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 16 04:59:45.805061 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 16 04:59:45.805067 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 16 04:59:45.805075 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 16 04:59:45.805082 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 16 04:59:45.805088 kernel: psci: probing for conduit method from ACPI.
May 16 04:59:45.805094 kernel: psci: PSCIv1.1 detected in firmware.
May 16 04:59:45.805100 kernel: psci: Using standard PSCI v0.2 function IDs
May 16 04:59:45.805107 kernel: psci: Trusted OS migration not required
May 16 04:59:45.805113 kernel: psci: SMC Calling Convention v1.1
May 16 04:59:45.805119 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 16 04:59:45.805126 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 16 04:59:45.805132 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 16 04:59:45.805140 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 16 04:59:45.805146 kernel: Detected PIPT I-cache on CPU0
May 16 04:59:45.805152 kernel: CPU features: detected: GIC system register CPU interface
May 16 04:59:45.805159 kernel: CPU features: detected: Spectre-v4
May 16 04:59:45.805165 kernel: CPU features: detected: Spectre-BHB
May 16 04:59:45.805171 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 16 04:59:45.805178 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 16 04:59:45.805184 kernel: CPU features: detected: ARM erratum 1418040
May 16 04:59:45.805190 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 16 04:59:45.805196 kernel: alternatives: applying boot alternatives
May 16 04:59:45.805204 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=abbfb9746d78592232fcb9b7326d67df09132d2302b7a6f1ad1c8c20f1763b7c
May 16 04:59:45.805212 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 04:59:45.805218 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 04:59:45.805225 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 04:59:45.805259 kernel: Fallback order for Node 0: 0
May 16 04:59:45.805266 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 16 04:59:45.805272 kernel: Policy zone: DMA
May 16 04:59:45.805279 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 04:59:45.805285 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 16 04:59:45.805291 kernel: software IO TLB: area num 4.
May 16 04:59:45.805298 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 16 04:59:45.805304 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 16 04:59:45.805311 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 04:59:45.805320 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 04:59:45.805327 kernel: rcu: RCU event tracing is enabled.
May 16 04:59:45.805334 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 04:59:45.805340 kernel: Trampoline variant of Tasks RCU enabled.
May 16 04:59:45.805346 kernel: Tracing variant of Tasks RCU enabled.
May 16 04:59:45.805353 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 04:59:45.805360 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 04:59:45.805366 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 04:59:45.805372 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 16 04:59:45.805379 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 16 04:59:45.805385 kernel: GICv3: 256 SPIs implemented
May 16 04:59:45.805393 kernel: GICv3: 0 Extended SPIs implemented
May 16 04:59:45.805399 kernel: Root IRQ handler: gic_handle_irq
May 16 04:59:45.805406 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 16 04:59:45.805412 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 16 04:59:45.805418 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 16 04:59:45.805424 kernel: ITS [mem 0x08080000-0x0809ffff]
May 16 04:59:45.805431 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 16 04:59:45.805437 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 16 04:59:45.805444 kernel: GICv3: using LPI property table @0x0000000040100000
May 16 04:59:45.805450 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 16 04:59:45.805456 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 16 04:59:45.805463 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 04:59:45.805470 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 16 04:59:45.805477 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 16 04:59:45.805483 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 16 04:59:45.805490 kernel: arm-pv: using stolen time PV
May 16 04:59:45.805496 kernel: Console: colour dummy device 80x25
May 16 04:59:45.805503 kernel: ACPI: Core revision 20240827
May 16 04:59:45.805510 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 16 04:59:45.805516 kernel: pid_max: default: 32768 minimum: 301
May 16 04:59:45.805523 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 16 04:59:45.805530 kernel: landlock: Up and running.
May 16 04:59:45.805537 kernel: SELinux: Initializing.
May 16 04:59:45.805543 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 04:59:45.805550 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 04:59:45.805557 kernel: rcu: Hierarchical SRCU implementation.
May 16 04:59:45.805563 kernel: rcu: Max phase no-delay instances is 400.
May 16 04:59:45.805570 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 16 04:59:45.805576 kernel: Remapping and enabling EFI services.
May 16 04:59:45.805583 kernel: smp: Bringing up secondary CPUs ...
May 16 04:59:45.805590 kernel: Detected PIPT I-cache on CPU1
May 16 04:59:45.805602 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 16 04:59:45.805609 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 16 04:59:45.805617 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 04:59:45.805624 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 16 04:59:45.805631 kernel: Detected PIPT I-cache on CPU2
May 16 04:59:45.805637 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 16 04:59:45.805644 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 16 04:59:45.805653 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 04:59:45.805659 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 16 04:59:45.805666 kernel: Detected PIPT I-cache on CPU3
May 16 04:59:45.805673 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 16 04:59:45.805680 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 16 04:59:45.805687 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 04:59:45.805693 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 16 04:59:45.805700 kernel: smp: Brought up 1 node, 4 CPUs
May 16 04:59:45.805707 kernel: SMP: Total of 4 processors activated.
May 16 04:59:45.805714 kernel: CPU: All CPU(s) started at EL1
May 16 04:59:45.805722 kernel: CPU features: detected: 32-bit EL0 Support
May 16 04:59:45.805729 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 16 04:59:45.805735 kernel: CPU features: detected: Common not Private translations
May 16 04:59:45.805742 kernel: CPU features: detected: CRC32 instructions
May 16 04:59:45.805749 kernel: CPU features: detected: Enhanced Virtualization Traps
May 16 04:59:45.805756 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 16 04:59:45.805763 kernel: CPU features: detected: LSE atomic instructions
May 16 04:59:45.805770 kernel: CPU features: detected: Privileged Access Never
May 16 04:59:45.805777 kernel: CPU features: detected: RAS Extension Support
May 16 04:59:45.805785 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 16 04:59:45.805792 kernel: alternatives: applying system-wide alternatives
May 16 04:59:45.805798 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 16 04:59:45.805806 kernel: Memory: 2440984K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 125536K reserved, 0K cma-reserved)
May 16 04:59:45.805813 kernel: devtmpfs: initialized
May 16 04:59:45.805820 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 04:59:45.805827 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 04:59:45.805834 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 16 04:59:45.805840 kernel: 0 pages in range for non-PLT usage
May 16 04:59:45.805848 kernel: 508544 pages in range for PLT usage
May 16 04:59:45.805855 kernel: pinctrl core: initialized pinctrl subsystem
May 16 04:59:45.805862 kernel: SMBIOS 3.0.0 present.
May 16 04:59:45.805869 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 16 04:59:45.805876 kernel: DMI: Memory slots populated: 1/1
May 16 04:59:45.805882 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 04:59:45.805889 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 16 04:59:45.805896 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 16 04:59:45.805903 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 16 04:59:45.805911 kernel: audit: initializing netlink subsys (disabled)
May 16 04:59:45.805918 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
May 16 04:59:45.805925 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 04:59:45.805932 kernel: cpuidle: using governor menu
May 16 04:59:45.805939 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 16 04:59:45.805946 kernel: ASID allocator initialised with 32768 entries
May 16 04:59:45.805953 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 04:59:45.805959 kernel: Serial: AMBA PL011 UART driver
May 16 04:59:45.805966 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 16 04:59:45.805975 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 16 04:59:45.805981 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 16 04:59:45.805988 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 16 04:59:45.805995 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 16 04:59:45.806002 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 16 04:59:45.806009 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 16 04:59:45.806016 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 16 04:59:45.806022 kernel: ACPI: Added _OSI(Module Device)
May 16 04:59:45.806029 kernel: ACPI: Added _OSI(Processor Device)
May 16 04:59:45.806037 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 04:59:45.806044 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 04:59:45.806051 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 04:59:45.806058 kernel: ACPI: Interpreter enabled
May 16 04:59:45.806064 kernel: ACPI: Using GIC for interrupt routing
May 16 04:59:45.806071 kernel: ACPI: MCFG table detected, 1 entries
May 16 04:59:45.806078 kernel: ACPI: CPU0 has been hot-added
May 16 04:59:45.806085 kernel: ACPI: CPU1 has been hot-added
May 16 04:59:45.806091 kernel: ACPI: CPU2 has been hot-added
May 16 04:59:45.806099 kernel: ACPI: CPU3 has been hot-added
May 16 04:59:45.806106 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 16 04:59:45.806113 kernel: printk: legacy console [ttyAMA0] enabled
May 16 04:59:45.806120 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 04:59:45.806289 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 04:59:45.806362 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 16 04:59:45.806422 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 16 04:59:45.806482 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 16 04:59:45.806542 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 16 04:59:45.806552 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 16 04:59:45.806559 kernel: PCI host bridge to bus 0000:00
May 16 04:59:45.806626 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 16 04:59:45.806683 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 16 04:59:45.806736 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 16 04:59:45.806788 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 04:59:45.806862 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 16 04:59:45.806937 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 16 04:59:45.806998 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 16 04:59:45.807058 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 16 04:59:45.807117 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 04:59:45.807175 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 16 04:59:45.807263 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 16 04:59:45.807334 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 16 04:59:45.807390 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 16 04:59:45.807442 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 16 04:59:45.807494 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 16 04:59:45.807503 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 16 04:59:45.807510 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 16 04:59:45.807517 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 16 04:59:45.807526 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 16 04:59:45.807532 kernel: iommu: Default domain type: Translated
May 16 04:59:45.807539 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 16 04:59:45.807546 kernel: efivars: Registered efivars operations
May 16 04:59:45.807553 kernel: vgaarb: loaded
May 16 04:59:45.807560 kernel: clocksource: Switched to clocksource arch_sys_counter
May 16 04:59:45.807566 kernel: VFS: Disk quotas dquot_6.6.0
May 16 04:59:45.807574 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 04:59:45.807580 kernel: pnp: PnP ACPI init
May 16 04:59:45.807645 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 16 04:59:45.807655 kernel: pnp: PnP ACPI: found 1 devices
May 16 04:59:45.807661 kernel: NET: Registered PF_INET protocol family
May 16 04:59:45.807669 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 04:59:45.807675 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 04:59:45.807682 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 04:59:45.807689 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 04:59:45.807696 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 16 04:59:45.807704 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 04:59:45.807711 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 04:59:45.807718 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 04:59:45.807725 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 04:59:45.807732 kernel: PCI: CLS 0 bytes, default 64
May 16 04:59:45.807739 kernel: kvm [1]: HYP mode not available
May 16 04:59:45.807746 kernel: Initialise system trusted keyrings
May 16 04:59:45.807753 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 04:59:45.807760 kernel: Key type asymmetric registered
May 16 04:59:45.807767 kernel: Asymmetric key parser 'x509' registered
May 16 04:59:45.807775 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 16 04:59:45.807782 kernel: io scheduler mq-deadline registered
May 16 04:59:45.807788 kernel: io scheduler kyber registered
May 16 04:59:45.807795 kernel: io scheduler bfq registered
May 16 04:59:45.807802 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 16 04:59:45.807809 kernel: ACPI: button: Power Button [PWRB]
May 16 04:59:45.807816 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 16 04:59:45.807876 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 16 04:59:45.807887 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 04:59:45.807894 kernel: thunder_xcv, ver 1.0
May 16 04:59:45.807900 kernel: thunder_bgx, ver 1.0
May 16 04:59:45.807907 kernel: nicpf, ver 1.0
May 16 04:59:45.807914 kernel: nicvf, ver 1.0
May 16 04:59:45.807982 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 16 04:59:45.808040 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T04:59:45 UTC (1747371585)
May 16 04:59:45.808049 kernel: hid: raw HID events driver (C) Jiri Kosina
May 16 04:59:45.808058 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 16 04:59:45.808065 kernel: watchdog: NMI not fully supported
May 16 04:59:45.808072 kernel: watchdog: Hard watchdog permanently disabled
May 16 04:59:45.808079 kernel: NET: Registered PF_INET6 protocol family
May 16 04:59:45.808085 kernel: Segment Routing with IPv6
May 16 04:59:45.808092 kernel: In-situ OAM (IOAM) with IPv6
May 16 04:59:45.808099 kernel: NET: Registered PF_PACKET protocol family
May 16 04:59:45.808107 kernel: Key type dns_resolver registered
May 16 04:59:45.808113 kernel: registered taskstats version 1
May 16 04:59:45.808122 kernel: Loading compiled-in X.509 certificates
May 16 04:59:45.808129 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 8b7bfdcbacbb780d2c84be81ca364eb79330b685'
May 16 04:59:45.808136 kernel: Demotion targets for Node 0: null
May 16 04:59:45.808143 kernel: Key type .fscrypt registered
May 16 04:59:45.808150 kernel: Key type fscrypt-provisioning registered
May 16 04:59:45.808157 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 04:59:45.808164 kernel: ima: Allocated hash algorithm: sha1
May 16 04:59:45.808171 kernel: ima: No architecture policies found
May 16 04:59:45.808177 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 16 04:59:45.808185 kernel: clk: Disabling unused clocks
May 16 04:59:45.808192 kernel: PM: genpd: Disabling unused power domains
May 16 04:59:45.808199 kernel: Warning: unable to open an initial console.
May 16 04:59:45.808206 kernel: Freeing unused kernel memory: 39424K
May 16 04:59:45.808213 kernel: Run /init as init process
May 16 04:59:45.808220 kernel: with arguments:
May 16 04:59:45.808226 kernel: /init
May 16 04:59:45.808278 kernel: with environment:
May 16 04:59:45.808286 kernel: HOME=/
May 16 04:59:45.808295 kernel: TERM=linux
May 16 04:59:45.808302 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 04:59:45.808309 systemd[1]: Successfully made /usr/ read-only.
May 16 04:59:45.808319 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 16 04:59:45.808327 systemd[1]: Detected virtualization kvm.
May 16 04:59:45.808334 systemd[1]: Detected architecture arm64.
May 16 04:59:45.808341 systemd[1]: Running in initrd.
May 16 04:59:45.808348 systemd[1]: No hostname configured, using default hostname.
May 16 04:59:45.808357 systemd[1]: Hostname set to .
May 16 04:59:45.808364 systemd[1]: Initializing machine ID from VM UUID.
May 16 04:59:45.808372 systemd[1]: Queued start job for default target initrd.target.
May 16 04:59:45.808379 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 16 04:59:45.808386 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 16 04:59:45.808394 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 16 04:59:45.808402 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 16 04:59:45.808409 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 16 04:59:45.808419 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 16 04:59:45.808428 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 16 04:59:45.808435 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 16 04:59:45.808442 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 16 04:59:45.808450 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 16 04:59:45.808457 systemd[1]: Reached target paths.target - Path Units.
May 16 04:59:45.808466 systemd[1]: Reached target slices.target - Slice Units.
May 16 04:59:45.808473 systemd[1]: Reached target swap.target - Swaps.
May 16 04:59:45.808480 systemd[1]: Reached target timers.target - Timer Units.
May 16 04:59:45.808488 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 16 04:59:45.808495 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 16 04:59:45.808502 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 16 04:59:45.808510 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 16 04:59:45.808517 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 16 04:59:45.808524 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 16 04:59:45.808533 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 16 04:59:45.808541 systemd[1]: Reached target sockets.target - Socket Units.
May 16 04:59:45.808548 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 16 04:59:45.808555 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 16 04:59:45.808563 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 16 04:59:45.808571 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 16 04:59:45.808578 systemd[1]: Starting systemd-fsck-usr.service...
May 16 04:59:45.808585 systemd[1]: Starting systemd-journald.service - Journal Service...
May 16 04:59:45.808594 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 16 04:59:45.808601 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 04:59:45.808608 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 16 04:59:45.808616 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 16 04:59:45.808624 systemd[1]: Finished systemd-fsck-usr.service.
May 16 04:59:45.808632 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 16 04:59:45.808658 systemd-journald[243]: Collecting audit messages is disabled.
May 16 04:59:45.808677 systemd-journald[243]: Journal started
May 16 04:59:45.808701 systemd-journald[243]: Runtime Journal (/run/log/journal/075b7df246914ef8923022593784f009) is 6M, max 48.5M, 42.4M free.
May 16 04:59:45.800731 systemd-modules-load[244]: Inserted module 'overlay'
May 16 04:59:45.812393 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 04:59:45.815358 systemd[1]: Started systemd-journald.service - Journal Service.
May 16 04:59:45.816666 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 16 04:59:45.820653 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 04:59:45.820598 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 16 04:59:45.823327 kernel: Bridge firewalling registered
May 16 04:59:45.822034 systemd-modules-load[244]: Inserted module 'br_netfilter'
May 16 04:59:45.822675 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 16 04:59:45.831729 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 16 04:59:45.833154 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 16 04:59:45.837661 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 16 04:59:45.839309 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 16 04:59:45.847577 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 16 04:59:45.850826 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 16 04:59:45.853445 systemd-tmpfiles[268]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 16 04:59:45.856169 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 16 04:59:45.857475 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 16 04:59:45.860717 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 16 04:59:45.868186 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=abbfb9746d78592232fcb9b7326d67df09132d2302b7a6f1ad1c8c20f1763b7c
May 16 04:59:45.897164 systemd-resolved[296]: Positive Trust Anchors:
May 16 04:59:45.897184 systemd-resolved[296]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 04:59:45.897215 systemd-resolved[296]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 16 04:59:45.901828 systemd-resolved[296]: Defaulting to hostname 'linux'.
May 16 04:59:45.905443 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 04:59:45.908819 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 04:59:45.938259 kernel: SCSI subsystem initialized
May 16 04:59:45.943255 kernel: Loading iSCSI transport class v2.0-870.
May 16 04:59:45.950258 kernel: iscsi: registered transport (tcp)
May 16 04:59:45.963268 kernel: iscsi: registered transport (qla4xxx)
May 16 04:59:45.963305 kernel: QLogic iSCSI HBA Driver
May 16 04:59:45.980951 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 16 04:59:46.000182 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 16 04:59:46.002207 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 16 04:59:46.047027 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 16 04:59:46.049349 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 16 04:59:46.110275 kernel: raid6: neonx8 gen() 15752 MB/s May 16 04:59:46.127270 kernel: raid6: neonx4 gen() 15758 MB/s May 16 04:59:46.144263 kernel: raid6: neonx2 gen() 13193 MB/s May 16 04:59:46.161278 kernel: raid6: neonx1 gen() 10437 MB/s May 16 04:59:46.178263 kernel: raid6: int64x8 gen() 6870 MB/s May 16 04:59:46.195260 kernel: raid6: int64x4 gen() 7322 MB/s May 16 04:59:46.212264 kernel: raid6: int64x2 gen() 6086 MB/s May 16 04:59:46.229368 kernel: raid6: int64x1 gen() 5049 MB/s May 16 04:59:46.229380 kernel: raid6: using algorithm neonx4 gen() 15758 MB/s May 16 04:59:46.247377 kernel: raid6: .... xor() 12317 MB/s, rmw enabled May 16 04:59:46.247395 kernel: raid6: using neon recovery algorithm May 16 04:59:46.252255 kernel: xor: measuring software checksum speed May 16 04:59:46.253537 kernel: 8regs : 17742 MB/sec May 16 04:59:46.253549 kernel: 32regs : 21199 MB/sec May 16 04:59:46.254841 kernel: arm64_neon : 27974 MB/sec May 16 04:59:46.254870 kernel: xor: using function: arm64_neon (27974 MB/sec) May 16 04:59:46.311261 kernel: Btrfs loaded, zoned=no, fsverity=no May 16 04:59:46.318133 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 16 04:59:46.320588 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 04:59:46.350883 systemd-udevd[501]: Using default interface naming scheme 'v255'. May 16 04:59:46.358045 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 04:59:46.360031 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 16 04:59:46.389912 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation May 16 04:59:46.417788 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 16 04:59:46.420268 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 04:59:46.474131 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
May 16 04:59:46.476875 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 16 04:59:46.519776 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 16 04:59:46.533936 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 04:59:46.534041 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 04:59:46.534052 kernel: GPT:9289727 != 19775487 May 16 04:59:46.534061 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 04:59:46.534070 kernel: GPT:9289727 != 19775487 May 16 04:59:46.534078 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 04:59:46.534086 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 04:59:46.530896 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 04:59:46.531011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 04:59:46.533409 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 16 04:59:46.535065 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 16 04:59:46.561428 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 16 04:59:46.569021 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 16 04:59:46.570533 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 16 04:59:46.583442 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 16 04:59:46.590869 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 04:59:46.597074 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 16 04:59:46.598300 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. 
May 16 04:59:46.601296 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 16 04:59:46.603519 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 04:59:46.605597 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 04:59:46.608288 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 16 04:59:46.609873 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 16 04:59:46.627925 disk-uuid[593]: Primary Header is updated. May 16 04:59:46.627925 disk-uuid[593]: Secondary Entries is updated. May 16 04:59:46.627925 disk-uuid[593]: Secondary Header is updated. May 16 04:59:46.631476 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 16 04:59:46.635257 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 04:59:47.642301 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 04:59:47.644606 disk-uuid[599]: The operation has completed successfully. May 16 04:59:47.677347 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 04:59:47.677446 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 16 04:59:47.698045 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 16 04:59:47.725972 sh[613]: Success May 16 04:59:47.740833 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 04:59:47.742655 kernel: device-mapper: uevent: version 1.0.3 May 16 04:59:47.742690 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 16 04:59:47.754938 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 16 04:59:47.780958 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 16 04:59:47.783947 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
May 16 04:59:47.797362 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 16 04:59:47.805039 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 16 04:59:47.805070 kernel: BTRFS: device fsid f0e5009f-ca00-4958-88ba-c503d4f8d676 devid 1 transid 42 /dev/mapper/usr (253:0) scanned by mount (625) May 16 04:59:47.807333 kernel: BTRFS info (device dm-0): first mount of filesystem f0e5009f-ca00-4958-88ba-c503d4f8d676 May 16 04:59:47.807373 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 16 04:59:47.807392 kernel: BTRFS info (device dm-0): using free-space-tree May 16 04:59:47.812817 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 16 04:59:47.814007 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 16 04:59:47.815435 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 16 04:59:47.816138 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 16 04:59:47.817654 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 16 04:59:47.840259 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (656) May 16 04:59:47.842669 kernel: BTRFS info (device vda6): first mount of filesystem fd50d08f-08fe-4d25-905f-c58fa90a20e3 May 16 04:59:47.842700 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 04:59:47.842711 kernel: BTRFS info (device vda6): using free-space-tree May 16 04:59:47.853257 kernel: BTRFS info (device vda6): last unmount of filesystem fd50d08f-08fe-4d25-905f-c58fa90a20e3 May 16 04:59:47.854653 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 16 04:59:47.856919 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 16 04:59:47.915210 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 04:59:47.919757 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 04:59:47.954713 systemd-networkd[801]: lo: Link UP May 16 04:59:47.954725 systemd-networkd[801]: lo: Gained carrier May 16 04:59:47.955465 systemd-networkd[801]: Enumeration completed May 16 04:59:47.955843 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 04:59:47.955847 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 04:59:47.956534 systemd-networkd[801]: eth0: Link UP May 16 04:59:47.956537 systemd-networkd[801]: eth0: Gained carrier May 16 04:59:47.956545 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 04:59:47.957686 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 04:59:47.958708 systemd[1]: Reached target network.target - Network. 
May 16 04:59:47.980282 systemd-networkd[801]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 04:59:47.991210 ignition[713]: Ignition 2.21.0 May 16 04:59:47.991227 ignition[713]: Stage: fetch-offline May 16 04:59:47.991277 ignition[713]: no configs at "/usr/lib/ignition/base.d" May 16 04:59:47.991286 ignition[713]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 04:59:47.991487 ignition[713]: parsed url from cmdline: "" May 16 04:59:47.991493 ignition[713]: no config URL provided May 16 04:59:47.991498 ignition[713]: reading system config file "/usr/lib/ignition/user.ign" May 16 04:59:47.991505 ignition[713]: no config at "/usr/lib/ignition/user.ign" May 16 04:59:47.991524 ignition[713]: op(1): [started] loading QEMU firmware config module May 16 04:59:47.991532 ignition[713]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 04:59:47.998205 ignition[713]: op(1): [finished] loading QEMU firmware config module May 16 04:59:48.036578 ignition[713]: parsing config with SHA512: 49b5b5cb55b92e0bf1fd1eb5d4d7bcd195c9a1f54b18b8e8230f3af1127a9eb79f7b71cb672877fc0d163abe3b9634005a13c70964b4dcd4236a6d80ace73676 May 16 04:59:48.040444 unknown[713]: fetched base config from "system" May 16 04:59:48.040455 unknown[713]: fetched user config from "qemu" May 16 04:59:48.040780 ignition[713]: fetch-offline: fetch-offline passed May 16 04:59:48.040830 ignition[713]: Ignition finished successfully May 16 04:59:48.043128 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 16 04:59:48.044585 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 04:59:48.047351 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 16 04:59:48.069058 ignition[815]: Ignition 2.21.0 May 16 04:59:48.069077 ignition[815]: Stage: kargs May 16 04:59:48.069209 ignition[815]: no configs at "/usr/lib/ignition/base.d" May 16 04:59:48.069218 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 04:59:48.071520 ignition[815]: kargs: kargs passed May 16 04:59:48.071583 ignition[815]: Ignition finished successfully May 16 04:59:48.073801 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 16 04:59:48.075889 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 16 04:59:48.100035 ignition[823]: Ignition 2.21.0 May 16 04:59:48.100049 ignition[823]: Stage: disks May 16 04:59:48.100177 ignition[823]: no configs at "/usr/lib/ignition/base.d" May 16 04:59:48.100185 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 04:59:48.101490 ignition[823]: disks: disks passed May 16 04:59:48.103094 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 16 04:59:48.101536 ignition[823]: Ignition finished successfully May 16 04:59:48.104409 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 16 04:59:48.105670 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 16 04:59:48.107433 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 04:59:48.108889 systemd[1]: Reached target sysinit.target - System Initialization. May 16 04:59:48.110627 systemd[1]: Reached target basic.target - Basic System. May 16 04:59:48.113125 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 16 04:59:48.137982 systemd-fsck[833]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 16 04:59:48.142077 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 16 04:59:48.144124 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 16 04:59:48.205256 kernel: EXT4-fs (vda9): mounted filesystem 8d9350e8-8974-4dcc-bef7-5de048368570 r/w with ordered data mode. Quota mode: none. May 16 04:59:48.206184 systemd[1]: Mounted sysroot.mount - /sysroot. May 16 04:59:48.207457 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 16 04:59:48.209708 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 16 04:59:48.211164 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 16 04:59:48.212153 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 16 04:59:48.212205 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 04:59:48.212283 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 16 04:59:48.221335 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 16 04:59:48.223723 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 16 04:59:48.226645 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (841) May 16 04:59:48.229049 kernel: BTRFS info (device vda6): first mount of filesystem fd50d08f-08fe-4d25-905f-c58fa90a20e3 May 16 04:59:48.229076 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 04:59:48.229087 kernel: BTRFS info (device vda6): using free-space-tree May 16 04:59:48.233122 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 16 04:59:48.271167 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory May 16 04:59:48.275203 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory May 16 04:59:48.278938 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory May 16 04:59:48.281751 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory May 16 04:59:48.348959 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 16 04:59:48.350868 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 16 04:59:48.352305 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 16 04:59:48.368259 kernel: BTRFS info (device vda6): last unmount of filesystem fd50d08f-08fe-4d25-905f-c58fa90a20e3 May 16 04:59:48.384327 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 16 04:59:48.395486 ignition[956]: INFO : Ignition 2.21.0 May 16 04:59:48.395486 ignition[956]: INFO : Stage: mount May 16 04:59:48.397981 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 04:59:48.397981 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 04:59:48.399963 ignition[956]: INFO : mount: mount passed May 16 04:59:48.399963 ignition[956]: INFO : Ignition finished successfully May 16 04:59:48.401288 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 16 04:59:48.403079 systemd[1]: Starting ignition-files.service - Ignition (files)... May 16 04:59:48.803895 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 16 04:59:48.805345 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 16 04:59:48.825250 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (969) May 16 04:59:48.827509 kernel: BTRFS info (device vda6): first mount of filesystem fd50d08f-08fe-4d25-905f-c58fa90a20e3 May 16 04:59:48.827527 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 04:59:48.827537 kernel: BTRFS info (device vda6): using free-space-tree May 16 04:59:48.831601 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 16 04:59:48.865733 ignition[986]: INFO : Ignition 2.21.0 May 16 04:59:48.865733 ignition[986]: INFO : Stage: files May 16 04:59:48.867977 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 04:59:48.867977 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 04:59:48.867977 ignition[986]: DEBUG : files: compiled without relabeling support, skipping May 16 04:59:48.871040 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 04:59:48.871040 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 04:59:48.871040 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 04:59:48.871040 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 04:59:48.871040 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 04:59:48.870867 unknown[986]: wrote ssh authorized keys file for user: core May 16 04:59:48.877929 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 16 04:59:48.877929 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 16 04:59:48.997717 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
result: OK May 16 04:59:49.260112 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 16 04:59:49.260112 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 16 04:59:49.263666 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 16 04:59:49.263666 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 04:59:49.263666 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 04:59:49.263666 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 04:59:49.263666 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 04:59:49.263666 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 04:59:49.263666 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 04:59:49.273956 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 04:59:49.273956 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 04:59:49.273956 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 04:59:49.273956 ignition[986]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 04:59:49.273956 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 04:59:49.273956 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 May 16 04:59:49.883423 systemd-networkd[801]: eth0: Gained IPv6LL May 16 04:59:49.954521 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 16 04:59:50.217586 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 04:59:50.217586 ignition[986]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 16 04:59:50.221085 ignition[986]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 04:59:50.224489 ignition[986]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 04:59:50.224489 ignition[986]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 16 04:59:50.224489 ignition[986]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 16 04:59:50.228727 ignition[986]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 04:59:50.228727 ignition[986]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 04:59:50.228727 ignition[986]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" May 16 04:59:50.228727 ignition[986]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 16 04:59:50.241181 ignition[986]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 04:59:50.244394 ignition[986]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 04:59:50.247086 ignition[986]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 16 04:59:50.247086 ignition[986]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 16 04:59:50.247086 ignition[986]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 16 04:59:50.247086 ignition[986]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 04:59:50.247086 ignition[986]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 04:59:50.247086 ignition[986]: INFO : files: files passed May 16 04:59:50.247086 ignition[986]: INFO : Ignition finished successfully May 16 04:59:50.247869 systemd[1]: Finished ignition-files.service - Ignition (files). May 16 04:59:50.250353 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 16 04:59:50.253396 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 16 04:59:50.267657 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 04:59:50.267749 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 16 04:59:50.270895 initrd-setup-root-after-ignition[1015]: grep: /sysroot/oem/oem-release: No such file or directory May 16 04:59:50.272191 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 04:59:50.272191 initrd-setup-root-after-ignition[1017]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 16 04:59:50.274846 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 04:59:50.274137 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 04:59:50.276019 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 16 04:59:50.278852 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 16 04:59:50.307670 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 04:59:50.307795 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 16 04:59:50.309687 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 16 04:59:50.311224 systemd[1]: Reached target initrd.target - Initrd Default Target. May 16 04:59:50.312979 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 16 04:59:50.313669 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 16 04:59:50.346268 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 04:59:50.348559 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 16 04:59:50.367661 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 16 04:59:50.368762 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 04:59:50.370534 systemd[1]: Stopped target timers.target - Timer Units. 
May 16 04:59:50.372074 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 04:59:50.372189 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 16 04:59:50.374513 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 16 04:59:50.376171 systemd[1]: Stopped target basic.target - Basic System. May 16 04:59:50.377695 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 16 04:59:50.379205 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 16 04:59:50.380999 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 16 04:59:50.382818 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 16 04:59:50.384489 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 16 04:59:50.386110 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 16 04:59:50.387898 systemd[1]: Stopped target sysinit.target - System Initialization. May 16 04:59:50.389599 systemd[1]: Stopped target local-fs.target - Local File Systems. May 16 04:59:50.391186 systemd[1]: Stopped target swap.target - Swaps. May 16 04:59:50.392546 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 04:59:50.392655 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 16 04:59:50.394733 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 16 04:59:50.396419 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 04:59:50.398153 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 16 04:59:50.401317 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 04:59:50.402523 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 04:59:50.402629 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
May 16 04:59:50.405059 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 04:59:50.405163 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 16 04:59:50.407058 systemd[1]: Stopped target paths.target - Path Units. May 16 04:59:50.408436 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 04:59:50.408547 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 04:59:50.410352 systemd[1]: Stopped target slices.target - Slice Units. May 16 04:59:50.411785 systemd[1]: Stopped target sockets.target - Socket Units. May 16 04:59:50.413263 systemd[1]: iscsid.socket: Deactivated successfully. May 16 04:59:50.413347 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 16 04:59:50.415320 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 04:59:50.415399 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 16 04:59:50.416859 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 04:59:50.416973 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 16 04:59:50.418441 systemd[1]: ignition-files.service: Deactivated successfully. May 16 04:59:50.418543 systemd[1]: Stopped ignition-files.service - Ignition (files). May 16 04:59:50.420608 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 16 04:59:50.422559 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 16 04:59:50.423323 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 04:59:50.423441 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 16 04:59:50.425271 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 04:59:50.425375 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
May 16 04:59:50.430490 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 04:59:50.430563 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 16 04:59:50.438095 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 04:59:50.446796 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 04:59:50.448668 ignition[1041]: INFO : Ignition 2.21.0 May 16 04:59:50.448668 ignition[1041]: INFO : Stage: umount May 16 04:59:50.448668 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 04:59:50.448668 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 04:59:50.448668 ignition[1041]: INFO : umount: umount passed May 16 04:59:50.448668 ignition[1041]: INFO : Ignition finished successfully May 16 04:59:50.446892 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 16 04:59:50.448207 systemd[1]: Stopped target network.target - Network. May 16 04:59:50.449492 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 04:59:50.449551 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 16 04:59:50.450907 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 04:59:50.450948 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 16 04:59:50.452137 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 04:59:50.452180 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 16 04:59:50.453738 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 16 04:59:50.453776 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 16 04:59:50.455549 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 16 04:59:50.458332 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 16 04:59:50.462355 systemd[1]: systemd-resolved.service: Deactivated successfully. 
May 16 04:59:50.462451 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 16 04:59:50.465128 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 16 04:59:50.465360 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 16 04:59:50.465395 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 04:59:50.468629 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 16 04:59:50.471122 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 04:59:50.471213 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 16 04:59:50.474273 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 16 04:59:50.474357 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 16 04:59:50.475627 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 04:59:50.475662 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 16 04:59:50.478128 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 16 04:59:50.482354 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 04:59:50.482461 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 16 04:59:50.485475 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 04:59:50.485519 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 16 04:59:50.488821 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 04:59:50.488861 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 16 04:59:50.489954 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 04:59:50.493492 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
May 16 04:59:50.493744 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 04:59:50.493818 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 16 04:59:50.495549 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 04:59:50.495623 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 16 04:59:50.500448 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 04:59:50.500551 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 16 04:59:50.505920 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 04:59:50.506047 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 04:59:50.507920 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 04:59:50.507954 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 16 04:59:50.509377 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 04:59:50.509405 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 16 04:59:50.510975 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 04:59:50.511020 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 16 04:59:50.513348 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 04:59:50.513390 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 16 04:59:50.515616 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 04:59:50.515659 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 16 04:59:50.518932 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 16 04:59:50.519999 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 16 04:59:50.520049 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
May 16 04:59:50.522681 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 04:59:50.522721 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 04:59:50.525376 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 16 04:59:50.525414 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 04:59:50.527933 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 04:59:50.527971 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 16 04:59:50.529782 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 04:59:50.529823 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 16 04:59:50.534763 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 04:59:50.534839 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 16 04:59:50.536759 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 16 04:59:50.538343 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 16 04:59:50.549420 systemd[1]: Switching root. May 16 04:59:50.590566 systemd-journald[243]: Journal stopped May 16 04:59:51.330848 systemd-journald[243]: Received SIGTERM from PID 1 (systemd). 
May 16 04:59:51.330898 kernel: SELinux: policy capability network_peer_controls=1 May 16 04:59:51.330910 kernel: SELinux: policy capability open_perms=1 May 16 04:59:51.330919 kernel: SELinux: policy capability extended_socket_class=1 May 16 04:59:51.330929 kernel: SELinux: policy capability always_check_network=0 May 16 04:59:51.330940 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 04:59:51.330951 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 04:59:51.330960 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 04:59:51.330969 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 04:59:51.330977 kernel: SELinux: policy capability userspace_initial_context=0 May 16 04:59:51.330987 kernel: audit: type=1403 audit(1747371590.754:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 04:59:51.331001 systemd[1]: Successfully loaded SELinux policy in 40.783ms. May 16 04:59:51.331022 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 8.969ms. May 16 04:59:51.331033 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 16 04:59:51.331047 systemd[1]: Detected virtualization kvm. May 16 04:59:51.331057 systemd[1]: Detected architecture arm64. May 16 04:59:51.331066 systemd[1]: Detected first boot. May 16 04:59:51.331076 systemd[1]: Initializing machine ID from VM UUID. May 16 04:59:51.331086 zram_generator::config[1087]: No configuration found. May 16 04:59:51.331096 kernel: NET: Registered PF_VSOCK protocol family May 16 04:59:51.331107 systemd[1]: Populated /etc with preset unit settings. May 16 04:59:51.331117 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
May 16 04:59:51.331128 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 04:59:51.331138 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 16 04:59:51.331148 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 04:59:51.331158 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 16 04:59:51.331168 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 16 04:59:51.331178 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 16 04:59:51.331187 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 16 04:59:51.331199 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 16 04:59:51.331213 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 16 04:59:51.331223 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 16 04:59:51.331316 systemd[1]: Created slice user.slice - User and Session Slice. May 16 04:59:51.331330 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 16 04:59:51.331341 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 16 04:59:51.331351 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 16 04:59:51.331360 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 16 04:59:51.331373 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 16 04:59:51.331383 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 16 04:59:51.331393 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
May 16 04:59:51.331407 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 16 04:59:51.331417 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 16 04:59:51.331427 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 16 04:59:51.331437 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 16 04:59:51.331447 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 16 04:59:51.331458 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 16 04:59:51.331468 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 16 04:59:51.331479 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 16 04:59:51.331488 systemd[1]: Reached target slices.target - Slice Units. May 16 04:59:51.331498 systemd[1]: Reached target swap.target - Swaps. May 16 04:59:51.331508 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 16 04:59:51.331518 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 16 04:59:51.331529 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 16 04:59:51.331539 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 16 04:59:51.331550 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 16 04:59:51.331560 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 16 04:59:51.331570 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 16 04:59:51.331580 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 16 04:59:51.331590 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 16 04:59:51.331600 systemd[1]: Mounting media.mount - External Media Directory... 
May 16 04:59:51.331610 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 16 04:59:51.331620 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 16 04:59:51.331631 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 16 04:59:51.331642 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 04:59:51.331652 systemd[1]: Reached target machines.target - Containers. May 16 04:59:51.331662 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 16 04:59:51.331673 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 04:59:51.331683 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 16 04:59:51.331693 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 16 04:59:51.331703 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 04:59:51.331713 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 04:59:51.331723 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 04:59:51.331734 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 16 04:59:51.331744 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 04:59:51.331754 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 04:59:51.331764 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 04:59:51.331774 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 16 04:59:51.331784 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
May 16 04:59:51.331794 systemd[1]: Stopped systemd-fsck-usr.service. May 16 04:59:51.331805 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 04:59:51.331816 kernel: fuse: init (API version 7.41) May 16 04:59:51.331826 systemd[1]: Starting systemd-journald.service - Journal Service... May 16 04:59:51.331835 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 16 04:59:51.331845 kernel: ACPI: bus type drm_connector registered May 16 04:59:51.331854 kernel: loop: module loaded May 16 04:59:51.331864 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 16 04:59:51.331873 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 16 04:59:51.331883 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 16 04:59:51.331894 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 16 04:59:51.331905 systemd[1]: verity-setup.service: Deactivated successfully. May 16 04:59:51.331914 systemd[1]: Stopped verity-setup.service. May 16 04:59:51.331924 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 16 04:59:51.331955 systemd-journald[1159]: Collecting audit messages is disabled. May 16 04:59:51.331977 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 16 04:59:51.331988 systemd-journald[1159]: Journal started May 16 04:59:51.332008 systemd-journald[1159]: Runtime Journal (/run/log/journal/075b7df246914ef8923022593784f009) is 6M, max 48.5M, 42.4M free. May 16 04:59:51.335319 systemd[1]: Mounted media.mount - External Media Directory. May 16 04:59:51.335349 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
May 16 04:59:51.118008 systemd[1]: Queued start job for default target multi-user.target. May 16 04:59:51.135213 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 16 04:59:51.135615 systemd[1]: systemd-journald.service: Deactivated successfully. May 16 04:59:51.338464 systemd[1]: Started systemd-journald.service - Journal Service. May 16 04:59:51.338943 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 16 04:59:51.340745 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 16 04:59:51.342106 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 16 04:59:51.344555 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 16 04:59:51.345872 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 04:59:51.346041 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 16 04:59:51.347397 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 04:59:51.347549 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 04:59:51.348951 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 04:59:51.349113 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 04:59:51.350454 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 04:59:51.350603 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 04:59:51.351867 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 04:59:51.352031 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 16 04:59:51.353345 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 04:59:51.353501 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 04:59:51.354752 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
May 16 04:59:51.356018 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 16 04:59:51.357516 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 16 04:59:51.358839 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 16 04:59:51.370823 systemd[1]: Reached target network-pre.target - Preparation for Network. May 16 04:59:51.373169 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 16 04:59:51.375065 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 16 04:59:51.376212 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 04:59:51.376266 systemd[1]: Reached target local-fs.target - Local File Systems. May 16 04:59:51.378026 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 16 04:59:51.386010 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 16 04:59:51.387092 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 04:59:51.388486 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 16 04:59:51.390424 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 16 04:59:51.391634 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 04:59:51.393625 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 16 04:59:51.394786 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 16 04:59:51.398417 systemd-journald[1159]: Time spent on flushing to /var/log/journal/075b7df246914ef8923022593784f009 is 16.108ms for 881 entries. May 16 04:59:51.398417 systemd-journald[1159]: System Journal (/var/log/journal/075b7df246914ef8923022593784f009) is 8M, max 195.6M, 187.6M free. May 16 04:59:51.435328 systemd-journald[1159]: Received client request to flush runtime journal. May 16 04:59:51.435397 kernel: loop0: detected capacity change from 0 to 107312 May 16 04:59:51.435417 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 16 04:59:51.435432 kernel: loop1: detected capacity change from 0 to 203944 May 16 04:59:51.395951 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 16 04:59:51.397833 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 16 04:59:51.403261 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 16 04:59:51.407324 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 16 04:59:51.411757 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 16 04:59:51.413268 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 16 04:59:51.421606 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 16 04:59:51.422932 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 16 04:59:51.424167 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 16 04:59:51.427013 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 16 04:59:51.440751 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 16 04:59:51.448777 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. May 16 04:59:51.448794 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. 
May 16 04:59:51.449606 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 16 04:59:51.452939 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 16 04:59:51.455868 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 16 04:59:51.460267 kernel: loop2: detected capacity change from 0 to 138376 May 16 04:59:51.482640 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 16 04:59:51.485031 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 16 04:59:51.490259 kernel: loop3: detected capacity change from 0 to 107312 May 16 04:59:51.496258 kernel: loop4: detected capacity change from 0 to 203944 May 16 04:59:51.509259 kernel: loop5: detected capacity change from 0 to 138376 May 16 04:59:51.515059 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 16 04:59:51.515531 (sd-merge)[1227]: Merged extensions into '/usr'. May 16 04:59:51.518791 systemd[1]: Reload requested from client PID 1203 ('systemd-sysext') (unit systemd-sysext.service)... May 16 04:59:51.518811 systemd[1]: Reloading... May 16 04:59:51.523342 systemd-tmpfiles[1226]: ACLs are not supported, ignoring. May 16 04:59:51.523362 systemd-tmpfiles[1226]: ACLs are not supported, ignoring. May 16 04:59:51.584262 zram_generator::config[1255]: No configuration found. May 16 04:59:51.654116 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 04:59:51.682193 ldconfig[1198]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 16 04:59:51.715787 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 16 04:59:51.715977 systemd[1]: Reloading finished in 196 ms. 
May 16 04:59:51.739290 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 16 04:59:51.740633 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 16 04:59:51.741973 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 16 04:59:51.756560 systemd[1]: Starting ensure-sysext.service... May 16 04:59:51.758130 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 16 04:59:51.769426 systemd[1]: Reload requested from client PID 1290 ('systemctl') (unit ensure-sysext.service)... May 16 04:59:51.769442 systemd[1]: Reloading... May 16 04:59:51.774630 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 16 04:59:51.774669 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 16 04:59:51.774869 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 16 04:59:51.775018 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 16 04:59:51.775584 systemd-tmpfiles[1291]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 16 04:59:51.775771 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. May 16 04:59:51.775818 systemd-tmpfiles[1291]: ACLs are not supported, ignoring. May 16 04:59:51.778455 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot. May 16 04:59:51.778467 systemd-tmpfiles[1291]: Skipping /boot May 16 04:59:51.786815 systemd-tmpfiles[1291]: Detected autofs mount point /boot during canonicalization of boot. May 16 04:59:51.786830 systemd-tmpfiles[1291]: Skipping /boot May 16 04:59:51.825253 zram_generator::config[1318]: No configuration found. 
May 16 04:59:51.887889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 04:59:51.949405 systemd[1]: Reloading finished in 179 ms. May 16 04:59:51.972668 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 16 04:59:51.978061 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 16 04:59:51.984306 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 04:59:51.986459 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 16 04:59:51.997907 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 16 04:59:52.001026 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 16 04:59:52.004298 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 16 04:59:52.008557 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 16 04:59:52.022864 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 16 04:59:52.025891 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 16 04:59:52.029767 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 04:59:52.032562 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 04:59:52.035004 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 04:59:52.040070 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 04:59:52.041275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 16 04:59:52.041391 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 04:59:52.046794 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 16 04:59:52.049396 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 04:59:52.050765 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 04:59:52.052602 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 04:59:52.054268 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 04:59:52.055696 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 16 04:59:52.057218 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 04:59:52.057438 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 04:59:52.058174 systemd-udevd[1359]: Using default interface naming scheme 'v255'. May 16 04:59:52.064849 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 16 04:59:52.069169 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 04:59:52.069409 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 04:59:52.069512 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 04:59:52.070763 augenrules[1389]: No rules May 16 04:59:52.070950 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
May 16 04:59:52.072698 systemd[1]: audit-rules.service: Deactivated successfully. May 16 04:59:52.072874 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 04:59:52.079660 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 04:59:52.080639 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 16 04:59:52.083758 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 16 04:59:52.091069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 16 04:59:52.094977 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 16 04:59:52.098166 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 16 04:59:52.099475 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 16 04:59:52.099593 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 16 04:59:52.099700 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 04:59:52.100557 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 16 04:59:52.102395 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 16 04:59:52.103917 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 04:59:52.104098 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 16 04:59:52.109323 systemd[1]: Finished ensure-sysext.service. May 16 04:59:52.117964 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 16 04:59:52.118501 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 16 04:59:52.119670 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 04:59:52.121317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 16 04:59:52.124811 augenrules[1397]: /sbin/augenrules: No change May 16 04:59:52.126202 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 16 04:59:52.127338 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 04:59:52.131201 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 16 04:59:52.137380 augenrules[1456]: No rules May 16 04:59:52.140071 systemd[1]: audit-rules.service: Deactivated successfully. May 16 04:59:52.140280 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 04:59:52.141446 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 04:59:52.141593 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 16 04:59:52.145149 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 16 04:59:52.162040 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 16 04:59:52.223948 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 16 04:59:52.228447 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 16 04:59:52.250606 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
May 16 04:59:52.255316 systemd-networkd[1449]: lo: Link UP May 16 04:59:52.255324 systemd-networkd[1449]: lo: Gained carrier May 16 04:59:52.256118 systemd-networkd[1449]: Enumeration completed May 16 04:59:52.256413 systemd[1]: Started systemd-networkd.service - Network Configuration. May 16 04:59:52.256535 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 04:59:52.256539 systemd-networkd[1449]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 04:59:52.256961 systemd-networkd[1449]: eth0: Link UP May 16 04:59:52.257064 systemd-networkd[1449]: eth0: Gained carrier May 16 04:59:52.257081 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 16 04:59:52.259482 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 16 04:59:52.260785 systemd-resolved[1358]: Positive Trust Anchors: May 16 04:59:52.260805 systemd-resolved[1358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 04:59:52.260838 systemd-resolved[1358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 16 04:59:52.263397 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
May 16 04:59:52.265339 systemd-networkd[1449]: eth0: DHCPv4 address 10.0.0.27/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 04:59:52.268117 systemd-resolved[1358]: Defaulting to hostname 'linux'.
May 16 04:59:52.272273 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 16 04:59:52.274031 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 16 04:59:52.275639 systemd[1]: Reached target network.target - Network.
May 16 04:59:52.275847 systemd-timesyncd[1455]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 04:59:52.276187 systemd-timesyncd[1455]: Initial clock synchronization to Fri 2025-05-16 04:59:52.230063 UTC.
May 16 04:59:52.276743 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 16 04:59:52.279141 systemd[1]: Reached target sysinit.target - System Initialization.
May 16 04:59:52.280333 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 16 04:59:52.281608 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 16 04:59:52.284402 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 16 04:59:52.285476 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 16 04:59:52.285509 systemd[1]: Reached target paths.target - Path Units.
May 16 04:59:52.286288 systemd[1]: Reached target time-set.target - System Time Set.
May 16 04:59:52.287264 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 16 04:59:52.288201 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 16 04:59:52.289265 systemd[1]: Reached target timers.target - Timer Units.
May 16 04:59:52.290909 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 16 04:59:52.293066 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 16 04:59:52.295971 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 16 04:59:52.298224 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 16 04:59:52.299285 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 16 04:59:52.303882 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 16 04:59:52.305165 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 16 04:59:52.310270 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 16 04:59:52.311481 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 16 04:59:52.312938 systemd[1]: Reached target sockets.target - Socket Units.
May 16 04:59:52.313793 systemd[1]: Reached target basic.target - Basic System.
May 16 04:59:52.314661 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 16 04:59:52.314692 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 16 04:59:52.315646 systemd[1]: Starting containerd.service - containerd container runtime...
May 16 04:59:52.318589 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 16 04:59:52.322461 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 16 04:59:52.325539 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 16 04:59:52.330071 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 16 04:59:52.331438 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 16 04:59:52.333073 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 16 04:59:52.336469 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 16 04:59:52.340914 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 16 04:59:52.344188 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 16 04:59:52.346980 jq[1498]: false
May 16 04:59:52.350056 systemd[1]: Starting systemd-logind.service - User Login Management...
May 16 04:59:52.351917 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 16 04:59:52.352400 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 16 04:59:52.353441 systemd[1]: Starting update-engine.service - Update Engine...
May 16 04:59:52.355367 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 16 04:59:52.359272 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 16 04:59:52.360777 extend-filesystems[1499]: Found loop3
May 16 04:59:52.360777 extend-filesystems[1499]: Found loop4
May 16 04:59:52.360777 extend-filesystems[1499]: Found loop5
May 16 04:59:52.360777 extend-filesystems[1499]: Found vda
May 16 04:59:52.360777 extend-filesystems[1499]: Found vda1
May 16 04:59:52.360777 extend-filesystems[1499]: Found vda2
May 16 04:59:52.360777 extend-filesystems[1499]: Found vda3
May 16 04:59:52.360777 extend-filesystems[1499]: Found usr
May 16 04:59:52.360777 extend-filesystems[1499]: Found vda4
May 16 04:59:52.360777 extend-filesystems[1499]: Found vda6
May 16 04:59:52.360777 extend-filesystems[1499]: Found vda7
May 16 04:59:52.360777 extend-filesystems[1499]: Found vda9
May 16 04:59:52.360777 extend-filesystems[1499]: Checking size of /dev/vda9
May 16 04:59:52.360691 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 16 04:59:52.365594 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 16 04:59:52.365925 systemd[1]: motdgen.service: Deactivated successfully.
May 16 04:59:52.366080 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 16 04:59:52.374271 jq[1509]: true
May 16 04:59:52.368618 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 16 04:59:52.370275 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 16 04:59:52.377455 extend-filesystems[1499]: Resized partition /dev/vda9
May 16 04:59:52.390950 extend-filesystems[1522]: resize2fs 1.47.2 (1-Jan-2025)
May 16 04:59:52.393389 (ntainerd)[1519]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 16 04:59:52.396186 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 16 04:59:52.404698 jq[1518]: true
May 16 04:59:52.406721 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 16 04:59:52.439319 update_engine[1507]: I20250516 04:59:52.438452  1507 main.cc:92] Flatcar Update Engine starting
May 16 04:59:52.448820 tar[1516]: linux-arm64/helm
May 16 04:59:52.460486 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 16 04:59:52.464885 dbus-daemon[1496]: [system] SELinux support is enabled
May 16 04:59:52.465054 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 16 04:59:52.485170 update_engine[1507]: I20250516 04:59:52.474448  1507 update_check_scheduler.cc:74] Next update check in 3m58s
May 16 04:59:52.470321 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 16 04:59:52.470346 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 16 04:59:52.471990 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 16 04:59:52.472006 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 16 04:59:52.473896 systemd[1]: Started update-engine.service - Update Engine.
May 16 04:59:52.482537 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 16 04:59:52.488116 extend-filesystems[1522]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 16 04:59:52.488116 extend-filesystems[1522]: old_desc_blocks = 1, new_desc_blocks = 1
May 16 04:59:52.488116 extend-filesystems[1522]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 16 04:59:52.494542 bash[1554]: Updated "/home/core/.ssh/authorized_keys"
May 16 04:59:52.494701 extend-filesystems[1499]: Resized filesystem in /dev/vda9
May 16 04:59:52.498885 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 16 04:59:52.499098 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 16 04:59:52.505806 systemd-logind[1505]: Watching system buttons on /dev/input/event0 (Power Button)
May 16 04:59:52.506100 systemd-logind[1505]: New seat seat0.
May 16 04:59:52.533137 systemd[1]: Started systemd-logind.service - User Login Management.
May 16 04:59:52.537921 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 16 04:59:52.539288 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 16 04:59:52.546767 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 16 04:59:52.554755 locksmithd[1557]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 16 04:59:52.641525 containerd[1519]: time="2025-05-16T04:59:52Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 16 04:59:52.643991 containerd[1519]: time="2025-05-16T04:59:52.643956440Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 16 04:59:52.654395 containerd[1519]: time="2025-05-16T04:59:52.654128800Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.84µs"
May 16 04:59:52.654395 containerd[1519]: time="2025-05-16T04:59:52.654202760Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 16 04:59:52.654395 containerd[1519]: time="2025-05-16T04:59:52.654220640Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 16 04:59:52.654589 containerd[1519]: time="2025-05-16T04:59:52.654570240Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 16 04:59:52.654647 containerd[1519]: time="2025-05-16T04:59:52.654634280Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 16 04:59:52.654757 containerd[1519]: time="2025-05-16T04:59:52.654742640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 16 04:59:52.654913 containerd[1519]: time="2025-05-16T04:59:52.654894280Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 16 04:59:52.654980 containerd[1519]: time="2025-05-16T04:59:52.654965000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 16 04:59:52.655383 containerd[1519]: time="2025-05-16T04:59:52.655335360Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 16 04:59:52.655383 containerd[1519]: time="2025-05-16T04:59:52.655366360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 16 04:59:52.655383 containerd[1519]: time="2025-05-16T04:59:52.655380240Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 16 04:59:52.655383 containerd[1519]: time="2025-05-16T04:59:52.655388120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 16 04:59:52.655535 containerd[1519]: time="2025-05-16T04:59:52.655485120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 16 04:59:52.655685 containerd[1519]: time="2025-05-16T04:59:52.655664760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 16 04:59:52.655712 containerd[1519]: time="2025-05-16T04:59:52.655698440Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 16 04:59:52.655712 containerd[1519]: time="2025-05-16T04:59:52.655709520Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 16 04:59:52.655755 containerd[1519]: time="2025-05-16T04:59:52.655739280Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 16 04:59:52.655942 containerd[1519]: time="2025-05-16T04:59:52.655923000Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 16 04:59:52.656026 containerd[1519]: time="2025-05-16T04:59:52.655984720Z" level=info msg="metadata content store policy set" policy=shared
May 16 04:59:52.658812 containerd[1519]: time="2025-05-16T04:59:52.658786000Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 16 04:59:52.658885 containerd[1519]: time="2025-05-16T04:59:52.658839760Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 16 04:59:52.658885 containerd[1519]: time="2025-05-16T04:59:52.658856320Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 16 04:59:52.658885 containerd[1519]: time="2025-05-16T04:59:52.658868040Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 16 04:59:52.658885 containerd[1519]: time="2025-05-16T04:59:52.658879600Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 16 04:59:52.658885 containerd[1519]: time="2025-05-16T04:59:52.658891160Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 16 04:59:52.658885 containerd[1519]: time="2025-05-16T04:59:52.658902360Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 16 04:59:52.658885 containerd[1519]: time="2025-05-16T04:59:52.658917200Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 16 04:59:52.659044 containerd[1519]: time="2025-05-16T04:59:52.658927160Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 16 04:59:52.659044 containerd[1519]: time="2025-05-16T04:59:52.658936960Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 16 04:59:52.659044 containerd[1519]: time="2025-05-16T04:59:52.658945560Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 16 04:59:52.659044 containerd[1519]: time="2025-05-16T04:59:52.658956760Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 16 04:59:52.659107 containerd[1519]: time="2025-05-16T04:59:52.659052800Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 16 04:59:52.659107 containerd[1519]: time="2025-05-16T04:59:52.659072520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 16 04:59:52.659107 containerd[1519]: time="2025-05-16T04:59:52.659087160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 16 04:59:52.659107 containerd[1519]: time="2025-05-16T04:59:52.659097560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 16 04:59:52.659107 containerd[1519]: time="2025-05-16T04:59:52.659106760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 16 04:59:52.659185 containerd[1519]: time="2025-05-16T04:59:52.659117360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 16 04:59:52.659185 containerd[1519]: time="2025-05-16T04:59:52.659127560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 16 04:59:52.659185 containerd[1519]: time="2025-05-16T04:59:52.659136760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 16 04:59:52.659185 containerd[1519]: time="2025-05-16T04:59:52.659147160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 16 04:59:52.659185 containerd[1519]: time="2025-05-16T04:59:52.659157360Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 16 04:59:52.659185 containerd[1519]: time="2025-05-16T04:59:52.659166920Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 16 04:59:52.659546 containerd[1519]: time="2025-05-16T04:59:52.659373040Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 16 04:59:52.659546 containerd[1519]: time="2025-05-16T04:59:52.659397080Z" level=info msg="Start snapshots syncer"
May 16 04:59:52.659546 containerd[1519]: time="2025-05-16T04:59:52.659425440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 16 04:59:52.659770 containerd[1519]: time="2025-05-16T04:59:52.659617240Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 16 04:59:52.659971 containerd[1519]: time="2025-05-16T04:59:52.659781120Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 16 04:59:52.660317 containerd[1519]: time="2025-05-16T04:59:52.660030840Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 16 04:59:52.660317 containerd[1519]: time="2025-05-16T04:59:52.660148280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 16 04:59:52.660317 containerd[1519]: time="2025-05-16T04:59:52.660176440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 16 04:59:52.660317 containerd[1519]: time="2025-05-16T04:59:52.660194200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 16 04:59:52.660317 containerd[1519]: time="2025-05-16T04:59:52.660206400Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 16 04:59:52.660317 containerd[1519]: time="2025-05-16T04:59:52.660222000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 16 04:59:52.660317 containerd[1519]: time="2025-05-16T04:59:52.660264120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 16 04:59:52.660317 containerd[1519]: time="2025-05-16T04:59:52.660277840Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 16 04:59:52.660491 containerd[1519]: time="2025-05-16T04:59:52.660372000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 16 04:59:52.660491 containerd[1519]: time="2025-05-16T04:59:52.660389320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 16 04:59:52.660491 containerd[1519]: time="2025-05-16T04:59:52.660403440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 16 04:59:52.660491 containerd[1519]: time="2025-05-16T04:59:52.660479720Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 16 04:59:52.660555 containerd[1519]: time="2025-05-16T04:59:52.660498160Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 16 04:59:52.660555 containerd[1519]: time="2025-05-16T04:59:52.660510320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 16 04:59:52.660555 containerd[1519]: time="2025-05-16T04:59:52.660523640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 16 04:59:52.660555 containerd[1519]: time="2025-05-16T04:59:52.660532480Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 16 04:59:52.660555 containerd[1519]: time="2025-05-16T04:59:52.660545800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 16 04:59:52.660635 containerd[1519]: time="2025-05-16T04:59:52.660559640Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 16 04:59:52.660653 containerd[1519]: time="2025-05-16T04:59:52.660633360Z" level=info msg="runtime interface created"
May 16 04:59:52.661253 containerd[1519]: time="2025-05-16T04:59:52.660684680Z" level=info msg="created NRI interface"
May 16 04:59:52.661253 containerd[1519]: time="2025-05-16T04:59:52.660703640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 16 04:59:52.661253 containerd[1519]: time="2025-05-16T04:59:52.660719520Z" level=info msg="Connect containerd service"
May 16 04:59:52.661253 containerd[1519]: time="2025-05-16T04:59:52.660760600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 16 04:59:52.661625 containerd[1519]: time="2025-05-16T04:59:52.661596640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 04:59:52.719221 sshd_keygen[1533]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 16 04:59:52.738041 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 16 04:59:52.742497 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 16 04:59:52.760202 systemd[1]: issuegen.service: Deactivated successfully.
May 16 04:59:52.760588 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 16 04:59:52.763819 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 16 04:59:52.766322 containerd[1519]: time="2025-05-16T04:59:52.766266760Z" level=info msg="Start subscribing containerd event"
May 16 04:59:52.766422 containerd[1519]: time="2025-05-16T04:59:52.766349080Z" level=info msg="Start recovering state"
May 16 04:59:52.766447 containerd[1519]: time="2025-05-16T04:59:52.766432720Z" level=info msg="Start event monitor"
May 16 04:59:52.766447 containerd[1519]: time="2025-05-16T04:59:52.766449000Z" level=info msg="Start cni network conf syncer for default"
May 16 04:59:52.766447 containerd[1519]: time="2025-05-16T04:59:52.766457400Z" level=info msg="Start streaming server"
May 16 04:59:52.766447 containerd[1519]: time="2025-05-16T04:59:52.766466400Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 16 04:59:52.766447 containerd[1519]: time="2025-05-16T04:59:52.766473560Z" level=info msg="runtime interface starting up..."
May 16 04:59:52.766447 containerd[1519]: time="2025-05-16T04:59:52.766479680Z" level=info msg="starting plugins..."
May 16 04:59:52.766793 containerd[1519]: time="2025-05-16T04:59:52.766492040Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 16 04:59:52.766907 containerd[1519]: time="2025-05-16T04:59:52.766875680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 16 04:59:52.766938 containerd[1519]: time="2025-05-16T04:59:52.766922640Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 16 04:59:52.767004 containerd[1519]: time="2025-05-16T04:59:52.766985120Z" level=info msg="containerd successfully booted in 0.125806s"
May 16 04:59:52.767073 systemd[1]: Started containerd.service - containerd container runtime.
May 16 04:59:52.779869 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 16 04:59:52.782613 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 16 04:59:52.784515 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 16 04:59:52.785795 systemd[1]: Reached target getty.target - Login Prompts.
May 16 04:59:52.820571 tar[1516]: linux-arm64/LICENSE
May 16 04:59:52.820668 tar[1516]: linux-arm64/README.md
May 16 04:59:52.845473 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 16 04:59:53.531345 systemd-networkd[1449]: eth0: Gained IPv6LL
May 16 04:59:53.533105 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 16 04:59:53.535192 systemd[1]: Reached target network-online.target - Network is Online.
May 16 04:59:53.537438 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 16 04:59:53.539578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 16 04:59:53.551635 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 16 04:59:53.564066 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 16 04:59:53.564315 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 16 04:59:53.565862 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 16 04:59:53.572126 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 16 04:59:54.073703 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 04:59:54.075069 systemd[1]: Reached target multi-user.target - Multi-User System.
May 16 04:59:54.076888 (kubelet)[1632]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 16 04:59:54.076946 systemd[1]: Startup finished in 2.120s (kernel) + 5.133s (initrd) + 3.370s (userspace) = 10.624s.
May 16 04:59:54.481811 kubelet[1632]: E0516 04:59:54.481710    1632 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 04:59:54.484253 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 04:59:54.484402 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 04:59:54.484753 systemd[1]: kubelet.service: Consumed 799ms CPU time, 256.5M memory peak.
May 16 04:59:58.975482 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 16 04:59:58.977055 systemd[1]: Started sshd@0-10.0.0.27:22-10.0.0.1:42278.service - OpenSSH per-connection server daemon (10.0.0.1:42278).
May 16 04:59:59.054442 sshd[1645]: Accepted publickey for core from 10.0.0.1 port 42278 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE
May 16 04:59:59.056279 sshd-session[1645]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 16 04:59:59.070215 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 16 04:59:59.071177 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 16 04:59:59.078583 systemd-logind[1505]: New session 1 of user core.
May 16 04:59:59.095288 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 16 04:59:59.097941 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 16 04:59:59.109176 (systemd)[1649]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 16 04:59:59.111258 systemd-logind[1505]: New session c1 of user core.
May 16 04:59:59.231997 systemd[1649]: Queued start job for default target default.target.
May 16 04:59:59.249262 systemd[1649]: Created slice app.slice - User Application Slice.
May 16 04:59:59.249290 systemd[1649]: Reached target paths.target - Paths.
May 16 04:59:59.249328 systemd[1649]: Reached target timers.target - Timers.
May 16 04:59:59.250616 systemd[1649]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 16 04:59:59.259626 systemd[1649]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 16 04:59:59.259680 systemd[1649]: Reached target sockets.target - Sockets.
May 16 04:59:59.259716 systemd[1649]: Reached target basic.target - Basic System.
May 16 04:59:59.259743 systemd[1649]: Reached target default.target - Main User Target.
May 16 04:59:59.259769 systemd[1649]: Startup finished in 143ms.
May 16 04:59:59.259943 systemd[1]: Started user@500.service - User Manager for UID 500.
May 16 04:59:59.261287 systemd[1]: Started session-1.scope - Session 1 of User core.
May 16 04:59:59.318526 systemd[1]: Started sshd@1-10.0.0.27:22-10.0.0.1:42294.service - OpenSSH per-connection server daemon (10.0.0.1:42294).
May 16 04:59:59.368823 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 42294 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 04:59:59.369969 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 04:59:59.373919 systemd-logind[1505]: New session 2 of user core. May 16 04:59:59.385427 systemd[1]: Started session-2.scope - Session 2 of User core. May 16 04:59:59.435261 sshd[1662]: Connection closed by 10.0.0.1 port 42294 May 16 04:59:59.435716 sshd-session[1660]: pam_unix(sshd:session): session closed for user core May 16 04:59:59.448183 systemd[1]: sshd@1-10.0.0.27:22-10.0.0.1:42294.service: Deactivated successfully. May 16 04:59:59.449718 systemd[1]: session-2.scope: Deactivated successfully. May 16 04:59:59.450404 systemd-logind[1505]: Session 2 logged out. Waiting for processes to exit. May 16 04:59:59.453414 systemd[1]: Started sshd@2-10.0.0.27:22-10.0.0.1:42304.service - OpenSSH per-connection server daemon (10.0.0.1:42304). May 16 04:59:59.455775 systemd-logind[1505]: Removed session 2. May 16 04:59:59.499793 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 42304 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 04:59:59.500982 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 04:59:59.505348 systemd-logind[1505]: New session 3 of user core. May 16 04:59:59.514445 systemd[1]: Started session-3.scope - Session 3 of User core. May 16 04:59:59.561951 sshd[1670]: Connection closed by 10.0.0.1 port 42304 May 16 04:59:59.562330 sshd-session[1668]: pam_unix(sshd:session): session closed for user core May 16 04:59:59.571642 systemd[1]: sshd@2-10.0.0.27:22-10.0.0.1:42304.service: Deactivated successfully. May 16 04:59:59.574562 systemd[1]: session-3.scope: Deactivated successfully. May 16 04:59:59.575225 systemd-logind[1505]: Session 3 logged out. Waiting for processes to exit. 
May 16 04:59:59.577851 systemd[1]: Started sshd@3-10.0.0.27:22-10.0.0.1:42316.service - OpenSSH per-connection server daemon (10.0.0.1:42316). May 16 04:59:59.578522 systemd-logind[1505]: Removed session 3. May 16 04:59:59.628771 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 42316 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 04:59:59.630082 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 04:59:59.634214 systemd-logind[1505]: New session 4 of user core. May 16 04:59:59.641405 systemd[1]: Started session-4.scope - Session 4 of User core. May 16 04:59:59.696397 sshd[1678]: Connection closed by 10.0.0.1 port 42316 May 16 04:59:59.696749 sshd-session[1676]: pam_unix(sshd:session): session closed for user core May 16 04:59:59.706247 systemd[1]: sshd@3-10.0.0.27:22-10.0.0.1:42316.service: Deactivated successfully. May 16 04:59:59.707793 systemd[1]: session-4.scope: Deactivated successfully. May 16 04:59:59.708467 systemd-logind[1505]: Session 4 logged out. Waiting for processes to exit. May 16 04:59:59.711005 systemd[1]: Started sshd@4-10.0.0.27:22-10.0.0.1:42328.service - OpenSSH per-connection server daemon (10.0.0.1:42328). May 16 04:59:59.711833 systemd-logind[1505]: Removed session 4. May 16 04:59:59.753070 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 42328 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 04:59:59.754342 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 04:59:59.758291 systemd-logind[1505]: New session 5 of user core. May 16 04:59:59.770413 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 16 04:59:59.843431 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 16 04:59:59.844059 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 04:59:59.867979 sudo[1687]: pam_unix(sudo:session): session closed for user root May 16 04:59:59.870204 sshd[1686]: Connection closed by 10.0.0.1 port 42328 May 16 04:59:59.869998 sshd-session[1684]: pam_unix(sshd:session): session closed for user core May 16 04:59:59.884642 systemd[1]: sshd@4-10.0.0.27:22-10.0.0.1:42328.service: Deactivated successfully. May 16 04:59:59.886445 systemd[1]: session-5.scope: Deactivated successfully. May 16 04:59:59.887987 systemd-logind[1505]: Session 5 logged out. Waiting for processes to exit. May 16 04:59:59.890456 systemd[1]: Started sshd@5-10.0.0.27:22-10.0.0.1:42338.service - OpenSSH per-connection server daemon (10.0.0.1:42338). May 16 04:59:59.891518 systemd-logind[1505]: Removed session 5. May 16 04:59:59.938796 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 42338 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 04:59:59.940345 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 04:59:59.945366 systemd-logind[1505]: New session 6 of user core. May 16 04:59:59.954441 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 16 05:00:00.006400 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 16 05:00:00.007081 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 05:00:00.087090 sudo[1697]: pam_unix(sudo:session): session closed for user root May 16 05:00:00.092176 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 16 05:00:00.092477 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 05:00:00.100908 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 16 05:00:00.145855 augenrules[1719]: No rules May 16 05:00:00.147151 systemd[1]: audit-rules.service: Deactivated successfully. May 16 05:00:00.147413 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 16 05:00:00.148326 sudo[1696]: pam_unix(sudo:session): session closed for user root May 16 05:00:00.150080 sshd[1695]: Connection closed by 10.0.0.1 port 42338 May 16 05:00:00.149945 sshd-session[1693]: pam_unix(sshd:session): session closed for user core May 16 05:00:00.161357 systemd[1]: sshd@5-10.0.0.27:22-10.0.0.1:42338.service: Deactivated successfully. May 16 05:00:00.162816 systemd[1]: session-6.scope: Deactivated successfully. May 16 05:00:00.163554 systemd-logind[1505]: Session 6 logged out. Waiting for processes to exit. May 16 05:00:00.165863 systemd[1]: Started sshd@6-10.0.0.27:22-10.0.0.1:42342.service - OpenSSH per-connection server daemon (10.0.0.1:42342). May 16 05:00:00.166541 systemd-logind[1505]: Removed session 6. May 16 05:00:00.219140 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 42342 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 05:00:00.220509 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:00:00.224659 systemd-logind[1505]: New session 7 of user core. 
May 16 05:00:00.236423 systemd[1]: Started session-7.scope - Session 7 of User core. May 16 05:00:00.287059 sudo[1731]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 05:00:00.287692 sudo[1731]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 16 05:00:00.819464 systemd[1]: Starting docker.service - Docker Application Container Engine... May 16 05:00:00.834562 (dockerd)[1751]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 16 05:00:01.192021 dockerd[1751]: time="2025-05-16T05:00:01.191895372Z" level=info msg="Starting up" May 16 05:00:01.193937 dockerd[1751]: time="2025-05-16T05:00:01.193902464Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 16 05:00:01.250400 dockerd[1751]: time="2025-05-16T05:00:01.250348905Z" level=info msg="Loading containers: start." May 16 05:00:01.262271 kernel: Initializing XFRM netlink socket May 16 05:00:01.448922 systemd-networkd[1449]: docker0: Link UP May 16 05:00:01.452032 dockerd[1751]: time="2025-05-16T05:00:01.451989423Z" level=info msg="Loading containers: done." 
May 16 05:00:01.465159 dockerd[1751]: time="2025-05-16T05:00:01.465110015Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 05:00:01.465287 dockerd[1751]: time="2025-05-16T05:00:01.465185766Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 16 05:00:01.465315 dockerd[1751]: time="2025-05-16T05:00:01.465297615Z" level=info msg="Initializing buildkit" May 16 05:00:01.485959 dockerd[1751]: time="2025-05-16T05:00:01.485916442Z" level=info msg="Completed buildkit initialization" May 16 05:00:01.492187 dockerd[1751]: time="2025-05-16T05:00:01.492150156Z" level=info msg="Daemon has completed initialization" May 16 05:00:01.492245 dockerd[1751]: time="2025-05-16T05:00:01.492199193Z" level=info msg="API listen on /run/docker.sock" May 16 05:00:01.492393 systemd[1]: Started docker.service - Docker Application Container Engine. May 16 05:00:02.596248 containerd[1519]: time="2025-05-16T05:00:02.596191372Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 16 05:00:03.239171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1594768707.mount: Deactivated successfully. 
May 16 05:00:04.395843 containerd[1519]: time="2025-05-16T05:00:04.395793875Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:04.396295 containerd[1519]: time="2025-05-16T05:00:04.396267489Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=25651976" May 16 05:00:04.397149 containerd[1519]: time="2025-05-16T05:00:04.397100117Z" level=info msg="ImageCreate event name:\"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:04.399941 containerd[1519]: time="2025-05-16T05:00:04.399878529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:04.400762 containerd[1519]: time="2025-05-16T05:00:04.400728813Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"25648774\" in 1.804495626s" May 16 05:00:04.400819 containerd[1519]: time="2025-05-16T05:00:04.400763804Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\"" May 16 05:00:04.403775 containerd[1519]: time="2025-05-16T05:00:04.403726076Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 16 05:00:04.734773 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 16 05:00:04.736211 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:00:04.870721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 05:00:04.873777 (kubelet)[2022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 05:00:04.916165 kubelet[2022]: E0516 05:00:04.916103 2022 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 05:00:04.919058 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 05:00:04.919186 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 05:00:04.921325 systemd[1]: kubelet.service: Consumed 146ms CPU time, 108.6M memory peak. 
May 16 05:00:06.025125 containerd[1519]: time="2025-05-16T05:00:06.024949863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:06.025994 containerd[1519]: time="2025-05-16T05:00:06.025961972Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=22459530" May 16 05:00:06.027002 containerd[1519]: time="2025-05-16T05:00:06.026946874Z" level=info msg="ImageCreate event name:\"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:06.029748 containerd[1519]: time="2025-05-16T05:00:06.029712935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:06.030599 containerd[1519]: time="2025-05-16T05:00:06.030565601Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"23995294\" in 1.626787516s" May 16 05:00:06.030703 containerd[1519]: time="2025-05-16T05:00:06.030672629Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\"" May 16 05:00:06.031347 containerd[1519]: time="2025-05-16T05:00:06.031270609Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 16 05:00:07.433522 containerd[1519]: time="2025-05-16T05:00:07.433469912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:07.434745 containerd[1519]: time="2025-05-16T05:00:07.434712791Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=17125281" May 16 05:00:07.436253 containerd[1519]: time="2025-05-16T05:00:07.435863618Z" level=info msg="ImageCreate event name:\"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:07.438586 containerd[1519]: time="2025-05-16T05:00:07.438552302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:07.439544 containerd[1519]: time="2025-05-16T05:00:07.439514187Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"18661063\" in 1.408216051s" May 16 05:00:07.439637 containerd[1519]: time="2025-05-16T05:00:07.439622622Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\"" May 16 05:00:07.440310 containerd[1519]: time="2025-05-16T05:00:07.440283176Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 16 05:00:08.444417 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4207597272.mount: Deactivated successfully.
May 16 05:00:08.655540 containerd[1519]: time="2025-05-16T05:00:08.655493922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:08.656014 containerd[1519]: time="2025-05-16T05:00:08.655976838Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=26871377" May 16 05:00:08.656881 containerd[1519]: time="2025-05-16T05:00:08.656830311Z" level=info msg="ImageCreate event name:\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:08.658872 containerd[1519]: time="2025-05-16T05:00:08.658846880Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:08.659355 containerd[1519]: time="2025-05-16T05:00:08.659321604Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"26870394\" in 1.219004267s" May 16 05:00:08.659418 containerd[1519]: time="2025-05-16T05:00:08.659354409Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 16 05:00:08.659772 containerd[1519]: time="2025-05-16T05:00:08.659739510Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 05:00:09.241644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount496989844.mount: Deactivated successfully. 
May 16 05:00:09.984453 containerd[1519]: time="2025-05-16T05:00:09.984408102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:09.985581 containerd[1519]: time="2025-05-16T05:00:09.985531398Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 16 05:00:09.987047 containerd[1519]: time="2025-05-16T05:00:09.987007135Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:09.990048 containerd[1519]: time="2025-05-16T05:00:09.989994812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:09.991027 containerd[1519]: time="2025-05-16T05:00:09.990997471Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.331219482s" May 16 05:00:09.991080 containerd[1519]: time="2025-05-16T05:00:09.991031836Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 16 05:00:09.991548 containerd[1519]: time="2025-05-16T05:00:09.991532127Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 05:00:10.418726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1017174841.mount: Deactivated successfully. 
May 16 05:00:10.423043 containerd[1519]: time="2025-05-16T05:00:10.422989051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 05:00:10.424131 containerd[1519]: time="2025-05-16T05:00:10.424090799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 16 05:00:10.425153 containerd[1519]: time="2025-05-16T05:00:10.425097358Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 05:00:10.427257 containerd[1519]: time="2025-05-16T05:00:10.426686081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 16 05:00:10.427460 containerd[1519]: time="2025-05-16T05:00:10.427434487Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 435.876306ms" May 16 05:00:10.427529 containerd[1519]: time="2025-05-16T05:00:10.427514650Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 16 05:00:10.428021 containerd[1519]: time="2025-05-16T05:00:10.427998668Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 16 05:00:10.960547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1317844726.mount: Deactivated successfully.
May 16 05:00:13.277903 containerd[1519]: time="2025-05-16T05:00:13.277848217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:13.278449 containerd[1519]: time="2025-05-16T05:00:13.278414612Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 16 05:00:13.279253 containerd[1519]: time="2025-05-16T05:00:13.279188643Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:13.281799 containerd[1519]: time="2025-05-16T05:00:13.281727446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:13.282966 containerd[1519]: time="2025-05-16T05:00:13.282879260Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.854851778s" May 16 05:00:13.282966 containerd[1519]: time="2025-05-16T05:00:13.282907518Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 16 05:00:15.169642 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 16 05:00:15.171095 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:00:15.331299 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 16 05:00:15.334463 (kubelet)[2188]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 16 05:00:15.365731 kubelet[2188]: E0516 05:00:15.365686 2188 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 05:00:15.368272 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 05:00:15.368488 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 05:00:15.369012 systemd[1]: kubelet.service: Consumed 125ms CPU time, 104.7M memory peak. May 16 05:00:18.438444 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 05:00:18.438735 systemd[1]: kubelet.service: Consumed 125ms CPU time, 104.7M memory peak. May 16 05:00:18.441413 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:00:18.463024 systemd[1]: Reload requested from client PID 2202 ('systemctl') (unit session-7.scope)... May 16 05:00:18.463040 systemd[1]: Reloading... May 16 05:00:18.532333 zram_generator::config[2249]: No configuration found. May 16 05:00:18.651796 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 05:00:18.737796 systemd[1]: Reloading finished in 274 ms. May 16 05:00:18.800735 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 16 05:00:18.800819 systemd[1]: kubelet.service: Failed with result 'signal'. May 16 05:00:18.802310 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 16 05:00:18.802370 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95.1M memory peak. May 16 05:00:18.804002 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:00:18.937024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 05:00:18.940369 (kubelet)[2291]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 05:00:18.973014 kubelet[2291]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 05:00:18.973014 kubelet[2291]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 05:00:18.973014 kubelet[2291]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 05:00:18.973353 kubelet[2291]: I0516 05:00:18.973057 2291 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 05:00:19.961558 kubelet[2291]: I0516 05:00:19.961505 2291 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 16 05:00:19.961558 kubelet[2291]: I0516 05:00:19.961543 2291 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 05:00:19.961836 kubelet[2291]: I0516 05:00:19.961810 2291 server.go:934] "Client rotation is on, will bootstrap in background" May 16 05:00:19.996589 kubelet[2291]: E0516 05:00:19.996549 2291 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.27:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 16 05:00:19.999024 kubelet[2291]: I0516 05:00:19.998995 2291 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 05:00:20.007415 kubelet[2291]: I0516 05:00:20.007390 2291 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 05:00:20.010830 kubelet[2291]: I0516 05:00:20.010796 2291 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 05:00:20.011040 kubelet[2291]: I0516 05:00:20.011020 2291 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 05:00:20.011165 kubelet[2291]: I0516 05:00:20.011133 2291 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 05:00:20.011340 kubelet[2291]: I0516 05:00:20.011159 2291 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 05:00:20.011426 kubelet[2291]: I0516 05:00:20.011402 2291 topology_manager.go:138] "Creating topology manager with none policy" May 16 05:00:20.011426 kubelet[2291]: I0516 05:00:20.011413 2291 container_manager_linux.go:300] "Creating device plugin manager" May 16 05:00:20.011614 kubelet[2291]: I0516 05:00:20.011590 2291 state_mem.go:36] "Initialized new in-memory state store" May 16 05:00:20.013481 kubelet[2291]: I0516 05:00:20.013449 2291 kubelet.go:408] "Attempting to sync node with API server" May 16 05:00:20.013481 kubelet[2291]: I0516 05:00:20.013479 2291 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 05:00:20.013543 kubelet[2291]: I0516 05:00:20.013502 2291 kubelet.go:314] "Adding apiserver pod source" May 16 05:00:20.013597 kubelet[2291]: I0516 05:00:20.013578 2291 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 05:00:20.016968 kubelet[2291]: W0516 05:00:20.016908 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 16 05:00:20.017000 kubelet[2291]: E0516 05:00:20.016971 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 16 05:00:20.017000 kubelet[2291]: W0516 05:00:20.016968 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 16 05:00:20.017045 kubelet[2291]: E0516 05:00:20.017011 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.27:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError"
May 16 05:00:20.021137 kubelet[2291]: I0516 05:00:20.021115 2291 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 05:00:20.021795 kubelet[2291]: I0516 05:00:20.021772 2291 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 05:00:20.021986 kubelet[2291]: W0516 05:00:20.021969 2291 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 05:00:20.023619 kubelet[2291]: I0516 05:00:20.023590 2291 server.go:1274] "Started kubelet" May 16 05:00:20.023922 kubelet[2291]: I0516 05:00:20.023885 2291 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 05:00:20.024149 kubelet[2291]: I0516 05:00:20.024101 2291 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 05:00:20.024392 kubelet[2291]: I0516 05:00:20.024368 2291 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 05:00:20.031142 kubelet[2291]: I0516 05:00:20.028746 2291 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 05:00:20.031142 kubelet[2291]: I0516 05:00:20.030556 2291 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 05:00:20.034837 kubelet[2291]: I0516 05:00:20.032442 2291 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 05:00:20.034837 kubelet[2291]: I0516 05:00:20.032548 2291 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 16 05:00:20.034837 kubelet[2291]: I0516 05:00:20.032605 2291 reconciler.go:26] "Reconciler: start to sync state" May 16 05:00:20.034837 kubelet[2291]: W0516 05:00:20.033056 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 16 05:00:20.034837 kubelet[2291]: E0516 05:00:20.033104 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 16 05:00:20.034837 kubelet[2291]: E0516 05:00:20.033714 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 05:00:20.034837 kubelet[2291]: I0516 05:00:20.033791 2291 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 05:00:20.034837 kubelet[2291]: E0516 05:00:20.034007 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="200ms" May 16 05:00:20.035598 kubelet[2291]: E0516 05:00:20.031547 2291 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.27:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.27:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fe93afe1a834f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] []
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 05:00:20.023567183 +0000 UTC m=+1.080200825,LastTimestamp:2025-05-16 05:00:20.023567183 +0000 UTC m=+1.080200825,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 05:00:20.036107 kubelet[2291]: I0516 05:00:20.036081 2291 server.go:449] "Adding debug handlers to kubelet server" May 16 05:00:20.036454 kubelet[2291]: I0516 05:00:20.036262 2291 factory.go:221] Registration of the containerd container factory successfully May 16 05:00:20.036454 kubelet[2291]: I0516 05:00:20.036280 2291 factory.go:221] Registration of the systemd container factory successfully May 16 05:00:20.037672 kubelet[2291]: E0516 05:00:20.037654 2291 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 05:00:20.046372 kubelet[2291]: I0516 05:00:20.046356 2291 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 05:00:20.046517 kubelet[2291]: I0516 05:00:20.046507 2291 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 05:00:20.046573 kubelet[2291]: I0516 05:00:20.046565 2291 state_mem.go:36] "Initialized new in-memory state store" May 16 05:00:20.047318 kubelet[2291]: I0516 05:00:20.047288 2291 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 05:00:20.048801 kubelet[2291]: I0516 05:00:20.048599 2291 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 05:00:20.048801 kubelet[2291]: I0516 05:00:20.048621 2291 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 05:00:20.048801 kubelet[2291]: I0516 05:00:20.048638 2291 kubelet.go:2321] "Starting kubelet main sync loop" May 16 05:00:20.048801 kubelet[2291]: E0516 05:00:20.048678 2291 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 05:00:20.049912 kubelet[2291]: W0516 05:00:20.049827 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 16 05:00:20.050050 kubelet[2291]: E0516 05:00:20.050026 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 16 05:00:20.134784 kubelet[2291]: E0516 05:00:20.134730 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 05:00:20.148983 kubelet[2291]: E0516 05:00:20.148946 2291 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 05:00:20.189294 kubelet[2291]: I0516 05:00:20.189205 2291 policy_none.go:49] "None policy: Start" May 16 05:00:20.190007 kubelet[2291]: I0516 05:00:20.189991 2291 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 05:00:20.190146 kubelet[2291]: I0516 05:00:20.190081 2291 state_mem.go:35] "Initializing new in-memory state store" May 16 05:00:20.196082 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
May 16 05:00:20.206908 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 16 05:00:20.209589 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 16 05:00:20.221045 kubelet[2291]: I0516 05:00:20.220973 2291 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 05:00:20.221291 kubelet[2291]: I0516 05:00:20.221163 2291 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 05:00:20.221291 kubelet[2291]: I0516 05:00:20.221176 2291 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 05:00:20.221456 kubelet[2291]: I0516 05:00:20.221440 2291 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 05:00:20.224389 kubelet[2291]: E0516 05:00:20.224354 2291 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 05:00:20.235444 kubelet[2291]: E0516 05:00:20.235399 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="400ms" May 16 05:00:20.322570 kubelet[2291]: I0516 05:00:20.322523 2291 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 05:00:20.323023 kubelet[2291]: E0516 05:00:20.322996 2291 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" May 16 05:00:20.357072 systemd[1]: Created slice kubepods-burstable-podf8668060a1218ee9e04dc146671495e7.slice - libcontainer container kubepods-burstable-podf8668060a1218ee9e04dc146671495e7.slice. 
May 16 05:00:20.372997 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice - libcontainer container kubepods-burstable-poda3416600bab1918b24583836301c9096.slice. May 16 05:00:20.394429 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice - libcontainer container kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice. May 16 05:00:20.524513 kubelet[2291]: I0516 05:00:20.524218 2291 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 05:00:20.524724 kubelet[2291]: E0516 05:00:20.524696 2291 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" May 16 05:00:20.533961 kubelet[2291]: I0516 05:00:20.533939 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:00:20.534042 kubelet[2291]: I0516 05:00:20.533974 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:00:20.534042 kubelet[2291]: I0516 05:00:20.534008 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8668060a1218ee9e04dc146671495e7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f8668060a1218ee9e04dc146671495e7\") " pod="kube-system/kube-apiserver-localhost" May 
16 05:00:20.534042 kubelet[2291]: I0516 05:00:20.534026 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8668060a1218ee9e04dc146671495e7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f8668060a1218ee9e04dc146671495e7\") " pod="kube-system/kube-apiserver-localhost" May 16 05:00:20.534042 kubelet[2291]: I0516 05:00:20.534041 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:00:20.534141 kubelet[2291]: I0516 05:00:20.534058 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:00:20.534141 kubelet[2291]: I0516 05:00:20.534080 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:00:20.534141 kubelet[2291]: I0516 05:00:20.534098 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " 
pod="kube-system/kube-scheduler-localhost" May 16 05:00:20.534141 kubelet[2291]: I0516 05:00:20.534114 2291 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f8668060a1218ee9e04dc146671495e7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f8668060a1218ee9e04dc146671495e7\") " pod="kube-system/kube-apiserver-localhost" May 16 05:00:20.636619 kubelet[2291]: E0516 05:00:20.636573 2291 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.27:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.27:6443: connect: connection refused" interval="800ms" May 16 05:00:20.671373 containerd[1519]: time="2025-05-16T05:00:20.671332404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f8668060a1218ee9e04dc146671495e7,Namespace:kube-system,Attempt:0,}" May 16 05:00:20.688161 containerd[1519]: time="2025-05-16T05:00:20.688097774Z" level=info msg="connecting to shim f3825d99995a0febc3cae0684ebf2c800a17e546a9e422f744b9450b8644ce6e" address="unix:///run/containerd/s/62b870381982f45e71065055bcc1668aed1851397eae1c0439004cd6805ddada" namespace=k8s.io protocol=ttrpc version=3 May 16 05:00:20.694676 containerd[1519]: time="2025-05-16T05:00:20.694633103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 16 05:00:20.697682 containerd[1519]: time="2025-05-16T05:00:20.697583107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 16 05:00:20.715526 systemd[1]: Started cri-containerd-f3825d99995a0febc3cae0684ebf2c800a17e546a9e422f744b9450b8644ce6e.scope - libcontainer container f3825d99995a0febc3cae0684ebf2c800a17e546a9e422f744b9450b8644ce6e. 
May 16 05:00:20.723958 containerd[1519]: time="2025-05-16T05:00:20.723902695Z" level=info msg="connecting to shim b0fdbdaad593e493fa9a1870591c5b065fef0e3a07e21e2f537505f4e7243fa8" address="unix:///run/containerd/s/8cb3bd465c0801af72f709de3f93d24e0e5c2f8a4480b1aa9342175e9a2c553a" namespace=k8s.io protocol=ttrpc version=3 May 16 05:00:20.725137 containerd[1519]: time="2025-05-16T05:00:20.725107212Z" level=info msg="connecting to shim b6c3ca72e56c40ef9e2427f6753a1cd41b68af2c3d13c84d2ffe53b5fbb17cd4" address="unix:///run/containerd/s/91e657b5537a8b398098fa04b51adc653e9ef37cecfeeb008b069395b06f41f2" namespace=k8s.io protocol=ttrpc version=3 May 16 05:00:20.751398 systemd[1]: Started cri-containerd-b0fdbdaad593e493fa9a1870591c5b065fef0e3a07e21e2f537505f4e7243fa8.scope - libcontainer container b0fdbdaad593e493fa9a1870591c5b065fef0e3a07e21e2f537505f4e7243fa8. May 16 05:00:20.753488 systemd[1]: Started cri-containerd-b6c3ca72e56c40ef9e2427f6753a1cd41b68af2c3d13c84d2ffe53b5fbb17cd4.scope - libcontainer container b6c3ca72e56c40ef9e2427f6753a1cd41b68af2c3d13c84d2ffe53b5fbb17cd4. 
May 16 05:00:20.762544 containerd[1519]: time="2025-05-16T05:00:20.762509534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f8668060a1218ee9e04dc146671495e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3825d99995a0febc3cae0684ebf2c800a17e546a9e422f744b9450b8644ce6e\"" May 16 05:00:20.766786 containerd[1519]: time="2025-05-16T05:00:20.765966883Z" level=info msg="CreateContainer within sandbox \"f3825d99995a0febc3cae0684ebf2c800a17e546a9e422f744b9450b8644ce6e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 05:00:20.775305 containerd[1519]: time="2025-05-16T05:00:20.775206539Z" level=info msg="Container 2131e6e78b937ef35a534a3d2a1386fb74e472d936cf1ea33a726362f7fb7128: CDI devices from CRI Config.CDIDevices: []" May 16 05:00:20.782639 containerd[1519]: time="2025-05-16T05:00:20.782599399Z" level=info msg="CreateContainer within sandbox \"f3825d99995a0febc3cae0684ebf2c800a17e546a9e422f744b9450b8644ce6e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2131e6e78b937ef35a534a3d2a1386fb74e472d936cf1ea33a726362f7fb7128\"" May 16 05:00:20.785292 containerd[1519]: time="2025-05-16T05:00:20.785265585Z" level=info msg="StartContainer for \"2131e6e78b937ef35a534a3d2a1386fb74e472d936cf1ea33a726362f7fb7128\"" May 16 05:00:20.788020 containerd[1519]: time="2025-05-16T05:00:20.787988862Z" level=info msg="connecting to shim 2131e6e78b937ef35a534a3d2a1386fb74e472d936cf1ea33a726362f7fb7128" address="unix:///run/containerd/s/62b870381982f45e71065055bcc1668aed1851397eae1c0439004cd6805ddada" protocol=ttrpc version=3 May 16 05:00:20.797909 containerd[1519]: time="2025-05-16T05:00:20.797869038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6c3ca72e56c40ef9e2427f6753a1cd41b68af2c3d13c84d2ffe53b5fbb17cd4\"" May 16 05:00:20.799017 
containerd[1519]: time="2025-05-16T05:00:20.798972006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0fdbdaad593e493fa9a1870591c5b065fef0e3a07e21e2f537505f4e7243fa8\"" May 16 05:00:20.801688 containerd[1519]: time="2025-05-16T05:00:20.801649786Z" level=info msg="CreateContainer within sandbox \"b6c3ca72e56c40ef9e2427f6753a1cd41b68af2c3d13c84d2ffe53b5fbb17cd4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 05:00:20.802926 containerd[1519]: time="2025-05-16T05:00:20.802899320Z" level=info msg="CreateContainer within sandbox \"b0fdbdaad593e493fa9a1870591c5b065fef0e3a07e21e2f537505f4e7243fa8\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 05:00:20.814293 containerd[1519]: time="2025-05-16T05:00:20.814254797Z" level=info msg="Container fb4a164515fc6f7beb84ebf4b18124b73460f3097e1cdc7845b0252c674619f5: CDI devices from CRI Config.CDIDevices: []" May 16 05:00:20.814563 systemd[1]: Started cri-containerd-2131e6e78b937ef35a534a3d2a1386fb74e472d936cf1ea33a726362f7fb7128.scope - libcontainer container 2131e6e78b937ef35a534a3d2a1386fb74e472d936cf1ea33a726362f7fb7128. 
May 16 05:00:20.815865 containerd[1519]: time="2025-05-16T05:00:20.815836806Z" level=info msg="Container 8d9be83d1379fa8b8fea46f3306984755e3f3b2cde326db6492e796dcbc06301: CDI devices from CRI Config.CDIDevices: []" May 16 05:00:20.821190 containerd[1519]: time="2025-05-16T05:00:20.821159262Z" level=info msg="CreateContainer within sandbox \"b6c3ca72e56c40ef9e2427f6753a1cd41b68af2c3d13c84d2ffe53b5fbb17cd4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fb4a164515fc6f7beb84ebf4b18124b73460f3097e1cdc7845b0252c674619f5\"" May 16 05:00:20.821763 containerd[1519]: time="2025-05-16T05:00:20.821726018Z" level=info msg="StartContainer for \"fb4a164515fc6f7beb84ebf4b18124b73460f3097e1cdc7845b0252c674619f5\"" May 16 05:00:20.822748 containerd[1519]: time="2025-05-16T05:00:20.822724918Z" level=info msg="connecting to shim fb4a164515fc6f7beb84ebf4b18124b73460f3097e1cdc7845b0252c674619f5" address="unix:///run/containerd/s/91e657b5537a8b398098fa04b51adc653e9ef37cecfeeb008b069395b06f41f2" protocol=ttrpc version=3 May 16 05:00:20.825141 containerd[1519]: time="2025-05-16T05:00:20.825103168Z" level=info msg="CreateContainer within sandbox \"b0fdbdaad593e493fa9a1870591c5b065fef0e3a07e21e2f537505f4e7243fa8\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8d9be83d1379fa8b8fea46f3306984755e3f3b2cde326db6492e796dcbc06301\"" May 16 05:00:20.825667 containerd[1519]: time="2025-05-16T05:00:20.825607716Z" level=info msg="StartContainer for \"8d9be83d1379fa8b8fea46f3306984755e3f3b2cde326db6492e796dcbc06301\"" May 16 05:00:20.826655 containerd[1519]: time="2025-05-16T05:00:20.826623247Z" level=info msg="connecting to shim 8d9be83d1379fa8b8fea46f3306984755e3f3b2cde326db6492e796dcbc06301" address="unix:///run/containerd/s/8cb3bd465c0801af72f709de3f93d24e0e5c2f8a4480b1aa9342175e9a2c553a" protocol=ttrpc version=3 May 16 05:00:20.844421 systemd[1]: Started 
cri-containerd-fb4a164515fc6f7beb84ebf4b18124b73460f3097e1cdc7845b0252c674619f5.scope - libcontainer container fb4a164515fc6f7beb84ebf4b18124b73460f3097e1cdc7845b0252c674619f5. May 16 05:00:20.848294 systemd[1]: Started cri-containerd-8d9be83d1379fa8b8fea46f3306984755e3f3b2cde326db6492e796dcbc06301.scope - libcontainer container 8d9be83d1379fa8b8fea46f3306984755e3f3b2cde326db6492e796dcbc06301. May 16 05:00:20.860733 containerd[1519]: time="2025-05-16T05:00:20.860624551Z" level=info msg="StartContainer for \"2131e6e78b937ef35a534a3d2a1386fb74e472d936cf1ea33a726362f7fb7128\" returns successfully" May 16 05:00:20.883728 kubelet[2291]: W0516 05:00:20.883667 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 16 05:00:20.884158 kubelet[2291]: E0516 05:00:20.883740 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.27:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 16 05:00:20.885181 kubelet[2291]: W0516 05:00:20.885135 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 16 05:00:20.885259 kubelet[2291]: E0516 05:00:20.885185 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.27:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" 
logger="UnhandledError" May 16 05:00:20.912153 containerd[1519]: time="2025-05-16T05:00:20.907940112Z" level=info msg="StartContainer for \"fb4a164515fc6f7beb84ebf4b18124b73460f3097e1cdc7845b0252c674619f5\" returns successfully" May 16 05:00:20.915932 containerd[1519]: time="2025-05-16T05:00:20.913803617Z" level=info msg="StartContainer for \"8d9be83d1379fa8b8fea46f3306984755e3f3b2cde326db6492e796dcbc06301\" returns successfully" May 16 05:00:20.929406 kubelet[2291]: I0516 05:00:20.926744 2291 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 05:00:20.929406 kubelet[2291]: E0516 05:00:20.927183 2291 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.27:6443/api/v1/nodes\": dial tcp 10.0.0.27:6443: connect: connection refused" node="localhost" May 16 05:00:20.981946 kubelet[2291]: W0516 05:00:20.981811 2291 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.27:6443: connect: connection refused May 16 05:00:20.981946 kubelet[2291]: E0516 05:00:20.981890 2291 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.27:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.27:6443: connect: connection refused" logger="UnhandledError" May 16 05:00:21.728553 kubelet[2291]: I0516 05:00:21.728525 2291 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 05:00:22.400403 kubelet[2291]: E0516 05:00:22.400356 2291 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 05:00:22.478030 kubelet[2291]: I0516 05:00:22.477975 2291 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 
16 05:00:22.478030 kubelet[2291]: E0516 05:00:22.478017 2291 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 16 05:00:22.486928 kubelet[2291]: E0516 05:00:22.486882 2291 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 05:00:23.018357 kubelet[2291]: I0516 05:00:23.018127 2291 apiserver.go:52] "Watching apiserver" May 16 05:00:23.033469 kubelet[2291]: I0516 05:00:23.033425 2291 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 05:00:23.176727 kubelet[2291]: E0516 05:00:23.176531 2291 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" May 16 05:00:24.417485 systemd[1]: Reload requested from client PID 2576 ('systemctl') (unit session-7.scope)... May 16 05:00:24.417500 systemd[1]: Reloading... May 16 05:00:24.479265 zram_generator::config[2622]: No configuration found. May 16 05:00:24.550900 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 05:00:24.652910 systemd[1]: Reloading finished in 235 ms. May 16 05:00:24.679700 kubelet[2291]: I0516 05:00:24.679018 2291 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 05:00:24.679145 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:00:24.697197 systemd[1]: kubelet.service: Deactivated successfully. May 16 05:00:24.698199 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 16 05:00:24.698286 systemd[1]: kubelet.service: Consumed 1.457s CPU time, 127.5M memory peak. 
May 16 05:00:24.700013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 16 05:00:24.828652 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 16 05:00:24.832087 (kubelet)[2661]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 16 05:00:24.875838 kubelet[2661]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 05:00:24.877271 kubelet[2661]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 05:00:24.877271 kubelet[2661]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 05:00:24.877271 kubelet[2661]: I0516 05:00:24.876219 2661 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 05:00:24.883057 kubelet[2661]: I0516 05:00:24.883030 2661 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 16 05:00:24.883148 kubelet[2661]: I0516 05:00:24.883137 2661 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 05:00:24.883478 kubelet[2661]: I0516 05:00:24.883456 2661 server.go:934] "Client rotation is on, will bootstrap in background" May 16 05:00:24.884919 kubelet[2661]: I0516 05:00:24.884893 2661 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 16 05:00:24.886977 kubelet[2661]: I0516 05:00:24.886950 2661 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 05:00:24.890574 kubelet[2661]: I0516 05:00:24.890556 2661 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 16 05:00:24.893620 kubelet[2661]: I0516 05:00:24.893564 2661 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 05:00:24.893889 kubelet[2661]: I0516 05:00:24.893875 2661 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 05:00:24.894219 kubelet[2661]: I0516 05:00:24.894193 2661 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 05:00:24.894452 kubelet[2661]: I0516 05:00:24.894301 2661 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nod
efs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 05:00:24.894575 kubelet[2661]: I0516 05:00:24.894563 2661 topology_manager.go:138] "Creating topology manager with none policy" May 16 05:00:24.894626 kubelet[2661]: I0516 05:00:24.894619 2661 container_manager_linux.go:300] "Creating device plugin manager" May 16 05:00:24.894702 kubelet[2661]: I0516 05:00:24.894693 2661 state_mem.go:36] "Initialized new in-memory state store" May 16 05:00:24.894857 kubelet[2661]: I0516 05:00:24.894844 2661 kubelet.go:408] "Attempting to sync node with API server" May 16 05:00:24.894917 kubelet[2661]: I0516 05:00:24.894908 2661 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 05:00:24.894974 kubelet[2661]: I0516 05:00:24.894966 2661 kubelet.go:314] "Adding apiserver pod source" May 16 05:00:24.895035 kubelet[2661]: I0516 05:00:24.895025 2661 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 05:00:24.895974 kubelet[2661]: I0516 05:00:24.895956 2661 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 16 05:00:24.896540 kubelet[2661]: I0516 05:00:24.896520 2661 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 05:00:24.896986 kubelet[2661]: I0516 05:00:24.896965 2661 server.go:1274] "Started kubelet" May 16 05:00:24.898114 kubelet[2661]: I0516 05:00:24.898091 2661 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 
05:00:24.899542 kubelet[2661]: I0516 05:00:24.899500 2661 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 05:00:24.903555 kubelet[2661]: I0516 05:00:24.903535 2661 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 05:00:24.903889 kubelet[2661]: E0516 05:00:24.903855 2661 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 05:00:24.905408 kubelet[2661]: I0516 05:00:24.905389 2661 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 16 05:00:24.905600 kubelet[2661]: I0516 05:00:24.905586 2661 reconciler.go:26] "Reconciler: start to sync state" May 16 05:00:24.906677 kubelet[2661]: I0516 05:00:24.906656 2661 server.go:449] "Adding debug handlers to kubelet server" May 16 05:00:24.907595 kubelet[2661]: I0516 05:00:24.907464 2661 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 05:00:24.908361 kubelet[2661]: I0516 05:00:24.908335 2661 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 05:00:24.908442 kubelet[2661]: I0516 05:00:24.908430 2661 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 05:00:24.908497 kubelet[2661]: I0516 05:00:24.908489 2661 kubelet.go:2321] "Starting kubelet main sync loop" May 16 05:00:24.908586 kubelet[2661]: I0516 05:00:24.908507 2661 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 05:00:24.908638 kubelet[2661]: E0516 05:00:24.908564 2661 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 05:00:24.909039 kubelet[2661]: I0516 05:00:24.909012 2661 factory.go:221] Registration of the systemd container factory successfully May 16 05:00:24.909180 kubelet[2661]: I0516 05:00:24.909138 2661 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 05:00:24.909417 kubelet[2661]: I0516 05:00:24.909386 2661 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 05:00:24.910027 kubelet[2661]: I0516 05:00:24.909561 2661 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 05:00:24.916852 kubelet[2661]: E0516 05:00:24.916832 2661 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 05:00:24.917576 kubelet[2661]: I0516 05:00:24.917557 2661 factory.go:221] Registration of the containerd container factory successfully May 16 05:00:24.955706 kubelet[2661]: I0516 05:00:24.955314 2661 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 05:00:24.955706 kubelet[2661]: I0516 05:00:24.955337 2661 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 05:00:24.955706 kubelet[2661]: I0516 05:00:24.955359 2661 state_mem.go:36] "Initialized new in-memory state store" May 16 05:00:24.955706 kubelet[2661]: I0516 05:00:24.955511 2661 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 05:00:24.955706 kubelet[2661]: I0516 05:00:24.955522 2661 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 05:00:24.955706 kubelet[2661]: I0516 05:00:24.955539 2661 policy_none.go:49] "None policy: Start" May 16 05:00:24.956995 kubelet[2661]: I0516 05:00:24.956257 2661 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 05:00:24.956995 kubelet[2661]: I0516 05:00:24.956279 2661 state_mem.go:35] "Initializing new in-memory state store" May 16 05:00:24.956995 kubelet[2661]: I0516 05:00:24.956418 2661 state_mem.go:75] "Updated machine memory state" May 16 05:00:24.961849 kubelet[2661]: I0516 05:00:24.961821 2661 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 05:00:24.962009 kubelet[2661]: I0516 05:00:24.961981 2661 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 05:00:24.962248 kubelet[2661]: I0516 05:00:24.961997 2661 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 05:00:24.962248 kubelet[2661]: I0516 05:00:24.962219 2661 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 05:00:25.016250 kubelet[2661]: E0516 05:00:25.016193 2661 
kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" May 16 05:00:25.063694 kubelet[2661]: I0516 05:00:25.063662 2661 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 05:00:25.070414 kubelet[2661]: I0516 05:00:25.070368 2661 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 16 05:00:25.070539 kubelet[2661]: I0516 05:00:25.070473 2661 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 16 05:00:25.206441 kubelet[2661]: I0516 05:00:25.206303 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f8668060a1218ee9e04dc146671495e7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f8668060a1218ee9e04dc146671495e7\") " pod="kube-system/kube-apiserver-localhost" May 16 05:00:25.206441 kubelet[2661]: I0516 05:00:25.206365 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 16 05:00:25.206441 kubelet[2661]: I0516 05:00:25.206431 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f8668060a1218ee9e04dc146671495e7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f8668060a1218ee9e04dc146671495e7\") " pod="kube-system/kube-apiserver-localhost" May 16 05:00:25.206771 kubelet[2661]: I0516 05:00:25.206452 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/f8668060a1218ee9e04dc146671495e7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f8668060a1218ee9e04dc146671495e7\") " pod="kube-system/kube-apiserver-localhost" May 16 05:00:25.206771 kubelet[2661]: I0516 05:00:25.206468 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:00:25.206771 kubelet[2661]: I0516 05:00:25.206528 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:00:25.206771 kubelet[2661]: I0516 05:00:25.206580 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:00:25.206771 kubelet[2661]: I0516 05:00:25.206600 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:00:25.206909 kubelet[2661]: I0516 05:00:25.206617 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 05:00:25.896093 kubelet[2661]: I0516 05:00:25.896036 2661 apiserver.go:52] "Watching apiserver" May 16 05:00:25.906117 kubelet[2661]: I0516 05:00:25.906076 2661 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 05:00:25.977385 kubelet[2661]: E0516 05:00:25.977276 2661 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 16 05:00:25.993010 kubelet[2661]: I0516 05:00:25.992844 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=0.992823812 podStartE2EDuration="992.823812ms" podCreationTimestamp="2025-05-16 05:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:00:25.977532554 +0000 UTC m=+1.141817502" watchObservedRunningTime="2025-05-16 05:00:25.992823812 +0000 UTC m=+1.157108760" May 16 05:00:25.993180 kubelet[2661]: I0516 05:00:25.993079 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9930713629999999 podStartE2EDuration="1.993071363s" podCreationTimestamp="2025-05-16 05:00:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:00:25.993059727 +0000 UTC m=+1.157344675" watchObservedRunningTime="2025-05-16 05:00:25.993071363 +0000 UTC m=+1.157356311" May 16 05:00:26.013776 kubelet[2661]: I0516 05:00:26.013710 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.013692812 podStartE2EDuration="1.013692812s" podCreationTimestamp="2025-05-16 05:00:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:00:26.001243681 +0000 UTC m=+1.165528629" watchObservedRunningTime="2025-05-16 05:00:26.013692812 +0000 UTC m=+1.177977760" May 16 05:00:28.656925 kubelet[2661]: I0516 05:00:28.656891 2661 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 05:00:28.657630 containerd[1519]: time="2025-05-16T05:00:28.657586630Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 05:00:28.657920 kubelet[2661]: I0516 05:00:28.657850 2661 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 05:00:29.615166 systemd[1]: Created slice kubepods-besteffort-podc1941448_0ab4_4261_a084_f4e2452effbe.slice - libcontainer container kubepods-besteffort-podc1941448_0ab4_4261_a084_f4e2452effbe.slice. 
May 16 05:00:29.634332 kubelet[2661]: I0516 05:00:29.634294 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c1941448-0ab4-4261-a084-f4e2452effbe-lib-modules\") pod \"kube-proxy-q2gg9\" (UID: \"c1941448-0ab4-4261-a084-f4e2452effbe\") " pod="kube-system/kube-proxy-q2gg9" May 16 05:00:29.634332 kubelet[2661]: I0516 05:00:29.634336 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9pd9\" (UniqueName: \"kubernetes.io/projected/c1941448-0ab4-4261-a084-f4e2452effbe-kube-api-access-f9pd9\") pod \"kube-proxy-q2gg9\" (UID: \"c1941448-0ab4-4261-a084-f4e2452effbe\") " pod="kube-system/kube-proxy-q2gg9" May 16 05:00:29.634498 kubelet[2661]: I0516 05:00:29.634366 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c1941448-0ab4-4261-a084-f4e2452effbe-kube-proxy\") pod \"kube-proxy-q2gg9\" (UID: \"c1941448-0ab4-4261-a084-f4e2452effbe\") " pod="kube-system/kube-proxy-q2gg9" May 16 05:00:29.634498 kubelet[2661]: I0516 05:00:29.634382 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c1941448-0ab4-4261-a084-f4e2452effbe-xtables-lock\") pod \"kube-proxy-q2gg9\" (UID: \"c1941448-0ab4-4261-a084-f4e2452effbe\") " pod="kube-system/kube-proxy-q2gg9" May 16 05:00:29.823994 systemd[1]: Created slice kubepods-besteffort-pode52183bc_8cad_48d2_9ec9_07c825c6eef1.slice - libcontainer container kubepods-besteffort-pode52183bc_8cad_48d2_9ec9_07c825c6eef1.slice. 
May 16 05:00:29.835911 kubelet[2661]: I0516 05:00:29.835848 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7nlc\" (UniqueName: \"kubernetes.io/projected/e52183bc-8cad-48d2-9ec9-07c825c6eef1-kube-api-access-h7nlc\") pod \"tigera-operator-7c5755cdcb-wtgrg\" (UID: \"e52183bc-8cad-48d2-9ec9-07c825c6eef1\") " pod="tigera-operator/tigera-operator-7c5755cdcb-wtgrg" May 16 05:00:29.835911 kubelet[2661]: I0516 05:00:29.835905 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e52183bc-8cad-48d2-9ec9-07c825c6eef1-var-lib-calico\") pod \"tigera-operator-7c5755cdcb-wtgrg\" (UID: \"e52183bc-8cad-48d2-9ec9-07c825c6eef1\") " pod="tigera-operator/tigera-operator-7c5755cdcb-wtgrg" May 16 05:00:29.926007 containerd[1519]: time="2025-05-16T05:00:29.925890955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q2gg9,Uid:c1941448-0ab4-4261-a084-f4e2452effbe,Namespace:kube-system,Attempt:0,}" May 16 05:00:29.950926 containerd[1519]: time="2025-05-16T05:00:29.950164481Z" level=info msg="connecting to shim ae7370065f13e09559021d5745f6b3bcecab9d0e3d3f5ae03dd6132f7ac33df0" address="unix:///run/containerd/s/3dd762d56a8b46547baf466e0f6032a56015976c800f3dc7d668dfd0e8c81fe3" namespace=k8s.io protocol=ttrpc version=3 May 16 05:00:29.974387 systemd[1]: Started cri-containerd-ae7370065f13e09559021d5745f6b3bcecab9d0e3d3f5ae03dd6132f7ac33df0.scope - libcontainer container ae7370065f13e09559021d5745f6b3bcecab9d0e3d3f5ae03dd6132f7ac33df0. 
May 16 05:00:29.995089 containerd[1519]: time="2025-05-16T05:00:29.995053516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-q2gg9,Uid:c1941448-0ab4-4261-a084-f4e2452effbe,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae7370065f13e09559021d5745f6b3bcecab9d0e3d3f5ae03dd6132f7ac33df0\"" May 16 05:00:29.997657 containerd[1519]: time="2025-05-16T05:00:29.997607881Z" level=info msg="CreateContainer within sandbox \"ae7370065f13e09559021d5745f6b3bcecab9d0e3d3f5ae03dd6132f7ac33df0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 05:00:30.007132 containerd[1519]: time="2025-05-16T05:00:30.006042085Z" level=info msg="Container 16310cc4a0e272e732198a14da5a2d7ed627605ac74cad3fe142846e31a8e48d: CDI devices from CRI Config.CDIDevices: []" May 16 05:00:30.013072 containerd[1519]: time="2025-05-16T05:00:30.013031531Z" level=info msg="CreateContainer within sandbox \"ae7370065f13e09559021d5745f6b3bcecab9d0e3d3f5ae03dd6132f7ac33df0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"16310cc4a0e272e732198a14da5a2d7ed627605ac74cad3fe142846e31a8e48d\"" May 16 05:00:30.013822 containerd[1519]: time="2025-05-16T05:00:30.013622336Z" level=info msg="StartContainer for \"16310cc4a0e272e732198a14da5a2d7ed627605ac74cad3fe142846e31a8e48d\"" May 16 05:00:30.015769 containerd[1519]: time="2025-05-16T05:00:30.015713707Z" level=info msg="connecting to shim 16310cc4a0e272e732198a14da5a2d7ed627605ac74cad3fe142846e31a8e48d" address="unix:///run/containerd/s/3dd762d56a8b46547baf466e0f6032a56015976c800f3dc7d668dfd0e8c81fe3" protocol=ttrpc version=3 May 16 05:00:30.040420 systemd[1]: Started cri-containerd-16310cc4a0e272e732198a14da5a2d7ed627605ac74cad3fe142846e31a8e48d.scope - libcontainer container 16310cc4a0e272e732198a14da5a2d7ed627605ac74cad3fe142846e31a8e48d. 
May 16 05:00:30.073579 containerd[1519]: time="2025-05-16T05:00:30.073540253Z" level=info msg="StartContainer for \"16310cc4a0e272e732198a14da5a2d7ed627605ac74cad3fe142846e31a8e48d\" returns successfully" May 16 05:00:30.128245 containerd[1519]: time="2025-05-16T05:00:30.128188113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-wtgrg,Uid:e52183bc-8cad-48d2-9ec9-07c825c6eef1,Namespace:tigera-operator,Attempt:0,}" May 16 05:00:30.149359 containerd[1519]: time="2025-05-16T05:00:30.149251065Z" level=info msg="connecting to shim 2b953925702409a2f9daefd8c044dde6caebc11aeaf2086a380caea0ae53b99a" address="unix:///run/containerd/s/6dab45b9d60de7ab08bd64d953affbfcd17c102a596d8330d39141747c03818f" namespace=k8s.io protocol=ttrpc version=3 May 16 05:00:30.169387 systemd[1]: Started cri-containerd-2b953925702409a2f9daefd8c044dde6caebc11aeaf2086a380caea0ae53b99a.scope - libcontainer container 2b953925702409a2f9daefd8c044dde6caebc11aeaf2086a380caea0ae53b99a. May 16 05:00:30.200376 containerd[1519]: time="2025-05-16T05:00:30.200271957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7c5755cdcb-wtgrg,Uid:e52183bc-8cad-48d2-9ec9-07c825c6eef1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2b953925702409a2f9daefd8c044dde6caebc11aeaf2086a380caea0ae53b99a\"" May 16 05:00:30.204819 containerd[1519]: time="2025-05-16T05:00:30.204709913Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 16 05:00:30.747863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4057948614.mount: Deactivated successfully. 
May 16 05:00:30.957795 kubelet[2661]: I0516 05:00:30.957731 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-q2gg9" podStartSLOduration=1.9577155579999999 podStartE2EDuration="1.957715558s" podCreationTimestamp="2025-05-16 05:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:00:30.957017261 +0000 UTC m=+6.121302209" watchObservedRunningTime="2025-05-16 05:00:30.957715558 +0000 UTC m=+6.122000546" May 16 05:00:31.870140 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1293618580.mount: Deactivated successfully. May 16 05:00:33.082579 containerd[1519]: time="2025-05-16T05:00:33.082526683Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:33.083676 containerd[1519]: time="2025-05-16T05:00:33.083642122Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=22143480" May 16 05:00:33.085355 containerd[1519]: time="2025-05-16T05:00:33.085302443Z" level=info msg="ImageCreate event name:\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:33.087870 containerd[1519]: time="2025-05-16T05:00:33.087825137Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:33.088679 containerd[1519]: time="2025-05-16T05:00:33.088648359Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest 
\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"22139475\" in 2.883905095s" May 16 05:00:33.088738 containerd[1519]: time="2025-05-16T05:00:33.088678833Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\"" May 16 05:00:33.092329 containerd[1519]: time="2025-05-16T05:00:33.092221067Z" level=info msg="CreateContainer within sandbox \"2b953925702409a2f9daefd8c044dde6caebc11aeaf2086a380caea0ae53b99a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 16 05:00:33.098681 containerd[1519]: time="2025-05-16T05:00:33.098461317Z" level=info msg="Container 9c5039440261150ac59856f41cc3a38e77c0a68874ea147f6088b18e9762d122: CDI devices from CRI Config.CDIDevices: []" May 16 05:00:33.100991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount769460197.mount: Deactivated successfully. May 16 05:00:33.103868 containerd[1519]: time="2025-05-16T05:00:33.103792845Z" level=info msg="CreateContainer within sandbox \"2b953925702409a2f9daefd8c044dde6caebc11aeaf2086a380caea0ae53b99a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9c5039440261150ac59856f41cc3a38e77c0a68874ea147f6088b18e9762d122\"" May 16 05:00:33.104335 containerd[1519]: time="2025-05-16T05:00:33.104298016Z" level=info msg="StartContainer for \"9c5039440261150ac59856f41cc3a38e77c0a68874ea147f6088b18e9762d122\"" May 16 05:00:33.105388 containerd[1519]: time="2025-05-16T05:00:33.105132515Z" level=info msg="connecting to shim 9c5039440261150ac59856f41cc3a38e77c0a68874ea147f6088b18e9762d122" address="unix:///run/containerd/s/6dab45b9d60de7ab08bd64d953affbfcd17c102a596d8330d39141747c03818f" protocol=ttrpc version=3 May 16 05:00:33.132371 systemd[1]: Started cri-containerd-9c5039440261150ac59856f41cc3a38e77c0a68874ea147f6088b18e9762d122.scope - libcontainer container 
9c5039440261150ac59856f41cc3a38e77c0a68874ea147f6088b18e9762d122. May 16 05:00:33.156927 containerd[1519]: time="2025-05-16T05:00:33.156888205Z" level=info msg="StartContainer for \"9c5039440261150ac59856f41cc3a38e77c0a68874ea147f6088b18e9762d122\" returns successfully" May 16 05:00:33.962640 kubelet[2661]: I0516 05:00:33.962573 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7c5755cdcb-wtgrg" podStartSLOduration=2.073540942 podStartE2EDuration="4.962558973s" podCreationTimestamp="2025-05-16 05:00:29 +0000 UTC" firstStartedPulling="2025-05-16 05:00:30.202158342 +0000 UTC m=+5.366443290" lastFinishedPulling="2025-05-16 05:00:33.091176373 +0000 UTC m=+8.255461321" observedRunningTime="2025-05-16 05:00:33.962347099 +0000 UTC m=+9.126632047" watchObservedRunningTime="2025-05-16 05:00:33.962558973 +0000 UTC m=+9.126843881" May 16 05:00:37.456626 update_engine[1507]: I20250516 05:00:37.456570 1507 update_attempter.cc:509] Updating boot flags... May 16 05:00:38.381028 sudo[1731]: pam_unix(sudo:session): session closed for user root May 16 05:00:38.385274 sshd[1730]: Connection closed by 10.0.0.1 port 42342 May 16 05:00:38.385216 sshd-session[1728]: pam_unix(sshd:session): session closed for user core May 16 05:00:38.391293 systemd-logind[1505]: Session 7 logged out. Waiting for processes to exit. May 16 05:00:38.391671 systemd[1]: sshd@6-10.0.0.27:22-10.0.0.1:42342.service: Deactivated successfully. May 16 05:00:38.393950 systemd[1]: session-7.scope: Deactivated successfully. May 16 05:00:38.394180 systemd[1]: session-7.scope: Consumed 7.225s CPU time, 224.4M memory peak. May 16 05:00:38.396647 systemd-logind[1505]: Removed session 7. May 16 05:00:43.966225 systemd[1]: Created slice kubepods-besteffort-pode760c3ba_cc29_467f_aa3d_bf576a4ee99c.slice - libcontainer container kubepods-besteffort-pode760c3ba_cc29_467f_aa3d_bf576a4ee99c.slice. 
May 16 05:00:44.028091 kubelet[2661]: I0516 05:00:44.028048 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e760c3ba-cc29-467f-aa3d-bf576a4ee99c-tigera-ca-bundle\") pod \"calico-typha-57d4998764-8xwvz\" (UID: \"e760c3ba-cc29-467f-aa3d-bf576a4ee99c\") " pod="calico-system/calico-typha-57d4998764-8xwvz" May 16 05:00:44.028091 kubelet[2661]: I0516 05:00:44.028092 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wznq\" (UniqueName: \"kubernetes.io/projected/e760c3ba-cc29-467f-aa3d-bf576a4ee99c-kube-api-access-8wznq\") pod \"calico-typha-57d4998764-8xwvz\" (UID: \"e760c3ba-cc29-467f-aa3d-bf576a4ee99c\") " pod="calico-system/calico-typha-57d4998764-8xwvz" May 16 05:00:44.028548 kubelet[2661]: I0516 05:00:44.028111 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e760c3ba-cc29-467f-aa3d-bf576a4ee99c-typha-certs\") pod \"calico-typha-57d4998764-8xwvz\" (UID: \"e760c3ba-cc29-467f-aa3d-bf576a4ee99c\") " pod="calico-system/calico-typha-57d4998764-8xwvz" May 16 05:00:44.045070 systemd[1]: Created slice kubepods-besteffort-pod53ade2b1_1a98_4def_8141_6dd53a117013.slice - libcontainer container kubepods-besteffort-pod53ade2b1_1a98_4def_8141_6dd53a117013.slice. 
May 16 05:00:44.229695 kubelet[2661]: I0516 05:00:44.229570 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/53ade2b1-1a98-4def-8141-6dd53a117013-flexvol-driver-host\") pod \"calico-node-5t5jx\" (UID: \"53ade2b1-1a98-4def-8141-6dd53a117013\") " pod="calico-system/calico-node-5t5jx" May 16 05:00:44.229695 kubelet[2661]: I0516 05:00:44.229612 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmhcm\" (UniqueName: \"kubernetes.io/projected/53ade2b1-1a98-4def-8141-6dd53a117013-kube-api-access-fmhcm\") pod \"calico-node-5t5jx\" (UID: \"53ade2b1-1a98-4def-8141-6dd53a117013\") " pod="calico-system/calico-node-5t5jx" May 16 05:00:44.229695 kubelet[2661]: I0516 05:00:44.229633 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/53ade2b1-1a98-4def-8141-6dd53a117013-policysync\") pod \"calico-node-5t5jx\" (UID: \"53ade2b1-1a98-4def-8141-6dd53a117013\") " pod="calico-system/calico-node-5t5jx" May 16 05:00:44.229695 kubelet[2661]: I0516 05:00:44.229654 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/53ade2b1-1a98-4def-8141-6dd53a117013-cni-bin-dir\") pod \"calico-node-5t5jx\" (UID: \"53ade2b1-1a98-4def-8141-6dd53a117013\") " pod="calico-system/calico-node-5t5jx" May 16 05:00:44.229695 kubelet[2661]: I0516 05:00:44.229671 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/53ade2b1-1a98-4def-8141-6dd53a117013-cni-net-dir\") pod \"calico-node-5t5jx\" (UID: \"53ade2b1-1a98-4def-8141-6dd53a117013\") " pod="calico-system/calico-node-5t5jx" May 16 05:00:44.230812 kubelet[2661]: I0516 05:00:44.229688 
2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53ade2b1-1a98-4def-8141-6dd53a117013-lib-modules\") pod \"calico-node-5t5jx\" (UID: \"53ade2b1-1a98-4def-8141-6dd53a117013\") " pod="calico-system/calico-node-5t5jx" May 16 05:00:44.230812 kubelet[2661]: I0516 05:00:44.229706 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/53ade2b1-1a98-4def-8141-6dd53a117013-var-run-calico\") pod \"calico-node-5t5jx\" (UID: \"53ade2b1-1a98-4def-8141-6dd53a117013\") " pod="calico-system/calico-node-5t5jx" May 16 05:00:44.230812 kubelet[2661]: I0516 05:00:44.229722 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53ade2b1-1a98-4def-8141-6dd53a117013-xtables-lock\") pod \"calico-node-5t5jx\" (UID: \"53ade2b1-1a98-4def-8141-6dd53a117013\") " pod="calico-system/calico-node-5t5jx" May 16 05:00:44.230812 kubelet[2661]: I0516 05:00:44.229739 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/53ade2b1-1a98-4def-8141-6dd53a117013-cni-log-dir\") pod \"calico-node-5t5jx\" (UID: \"53ade2b1-1a98-4def-8141-6dd53a117013\") " pod="calico-system/calico-node-5t5jx" May 16 05:00:44.230812 kubelet[2661]: I0516 05:00:44.229755 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53ade2b1-1a98-4def-8141-6dd53a117013-tigera-ca-bundle\") pod \"calico-node-5t5jx\" (UID: \"53ade2b1-1a98-4def-8141-6dd53a117013\") " pod="calico-system/calico-node-5t5jx" May 16 05:00:44.230973 kubelet[2661]: I0516 05:00:44.229769 2661 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/53ade2b1-1a98-4def-8141-6dd53a117013-var-lib-calico\") pod \"calico-node-5t5jx\" (UID: \"53ade2b1-1a98-4def-8141-6dd53a117013\") " pod="calico-system/calico-node-5t5jx" May 16 05:00:44.230973 kubelet[2661]: I0516 05:00:44.229783 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/53ade2b1-1a98-4def-8141-6dd53a117013-node-certs\") pod \"calico-node-5t5jx\" (UID: \"53ade2b1-1a98-4def-8141-6dd53a117013\") " pod="calico-system/calico-node-5t5jx" May 16 05:00:44.230973 kubelet[2661]: E0516 05:00:44.230147 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5jxv4" podUID="e6af650f-1da2-49ae-aae7-f7207bc906df" May 16 05:00:44.272563 containerd[1519]: time="2025-05-16T05:00:44.272497921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57d4998764-8xwvz,Uid:e760c3ba-cc29-467f-aa3d-bf576a4ee99c,Namespace:calico-system,Attempt:0,}" May 16 05:00:44.306063 containerd[1519]: time="2025-05-16T05:00:44.306005920Z" level=info msg="connecting to shim af4239a3112c513be02394c18058dd7da1938550b3490d72ce7d4798737e5230" address="unix:///run/containerd/s/005e8a7b2ba0006873cde345ccd83b04b9ad236104ff88ac2c33df7bbc21df6a" namespace=k8s.io protocol=ttrpc version=3 May 16 05:00:44.330386 kubelet[2661]: I0516 05:00:44.330153 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/e6af650f-1da2-49ae-aae7-f7207bc906df-registration-dir\") pod \"csi-node-driver-5jxv4\" (UID: \"e6af650f-1da2-49ae-aae7-f7207bc906df\") " 
pod="calico-system/csi-node-driver-5jxv4" May 16 05:00:44.331258 kubelet[2661]: I0516 05:00:44.331189 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/e6af650f-1da2-49ae-aae7-f7207bc906df-kubelet-dir\") pod \"csi-node-driver-5jxv4\" (UID: \"e6af650f-1da2-49ae-aae7-f7207bc906df\") " pod="calico-system/csi-node-driver-5jxv4" May 16 05:00:44.333292 kubelet[2661]: I0516 05:00:44.332357 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/e6af650f-1da2-49ae-aae7-f7207bc906df-varrun\") pod \"csi-node-driver-5jxv4\" (UID: \"e6af650f-1da2-49ae-aae7-f7207bc906df\") " pod="calico-system/csi-node-driver-5jxv4" May 16 05:00:44.333292 kubelet[2661]: I0516 05:00:44.332503 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hlln\" (UniqueName: \"kubernetes.io/projected/e6af650f-1da2-49ae-aae7-f7207bc906df-kube-api-access-9hlln\") pod \"csi-node-driver-5jxv4\" (UID: \"e6af650f-1da2-49ae-aae7-f7207bc906df\") " pod="calico-system/csi-node-driver-5jxv4" May 16 05:00:44.333292 kubelet[2661]: I0516 05:00:44.332614 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/e6af650f-1da2-49ae-aae7-f7207bc906df-socket-dir\") pod \"csi-node-driver-5jxv4\" (UID: \"e6af650f-1da2-49ae-aae7-f7207bc906df\") " pod="calico-system/csi-node-driver-5jxv4" May 16 05:00:44.348796 kubelet[2661]: E0516 05:00:44.348766 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.348952 kubelet[2661]: W0516 05:00:44.348934 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" May 16 05:00:44.349040 kubelet[2661]: E0516 05:00:44.349020 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.349518 kubelet[2661]: E0516 05:00:44.349462 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.349518 kubelet[2661]: W0516 05:00:44.349482 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.349518 kubelet[2661]: E0516 05:00:44.349498 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.350929 kubelet[2661]: E0516 05:00:44.350835 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.350929 kubelet[2661]: W0516 05:00:44.350869 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.350929 kubelet[2661]: E0516 05:00:44.350891 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.388391 systemd[1]: Started cri-containerd-af4239a3112c513be02394c18058dd7da1938550b3490d72ce7d4798737e5230.scope - libcontainer container af4239a3112c513be02394c18058dd7da1938550b3490d72ce7d4798737e5230. 
May 16 05:00:44.425177 containerd[1519]: time="2025-05-16T05:00:44.425112179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57d4998764-8xwvz,Uid:e760c3ba-cc29-467f-aa3d-bf576a4ee99c,Namespace:calico-system,Attempt:0,} returns sandbox id \"af4239a3112c513be02394c18058dd7da1938550b3490d72ce7d4798737e5230\"" May 16 05:00:44.431441 containerd[1519]: time="2025-05-16T05:00:44.431404550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 16 05:00:44.433737 kubelet[2661]: E0516 05:00:44.433714 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.433737 kubelet[2661]: W0516 05:00:44.433734 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.433933 kubelet[2661]: E0516 05:00:44.433753 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.433933 kubelet[2661]: E0516 05:00:44.433918 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.433933 kubelet[2661]: W0516 05:00:44.433927 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.434020 kubelet[2661]: E0516 05:00:44.433937 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:44.434146 kubelet[2661]: E0516 05:00:44.434131 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.434146 kubelet[2661]: W0516 05:00:44.434142 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.434264 kubelet[2661]: E0516 05:00:44.434156 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.434366 kubelet[2661]: E0516 05:00:44.434350 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.434366 kubelet[2661]: W0516 05:00:44.434366 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.434472 kubelet[2661]: E0516 05:00:44.434386 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:44.434523 kubelet[2661]: E0516 05:00:44.434512 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.434523 kubelet[2661]: W0516 05:00:44.434522 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.434600 kubelet[2661]: E0516 05:00:44.434534 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.434662 kubelet[2661]: E0516 05:00:44.434650 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.434662 kubelet[2661]: W0516 05:00:44.434660 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.434727 kubelet[2661]: E0516 05:00:44.434673 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:44.434862 kubelet[2661]: E0516 05:00:44.434851 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.434862 kubelet[2661]: W0516 05:00:44.434861 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.434951 kubelet[2661]: E0516 05:00:44.434873 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.435038 kubelet[2661]: E0516 05:00:44.435026 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.435038 kubelet[2661]: W0516 05:00:44.435037 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.435130 kubelet[2661]: E0516 05:00:44.435066 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:44.435212 kubelet[2661]: E0516 05:00:44.435200 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.435255 kubelet[2661]: W0516 05:00:44.435211 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.435255 kubelet[2661]: E0516 05:00:44.435242 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.435391 kubelet[2661]: E0516 05:00:44.435380 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.435391 kubelet[2661]: W0516 05:00:44.435390 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.435391 kubelet[2661]: E0516 05:00:44.435423 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:44.435560 kubelet[2661]: E0516 05:00:44.435533 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.435560 kubelet[2661]: W0516 05:00:44.435540 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.435610 kubelet[2661]: E0516 05:00:44.435568 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.435695 kubelet[2661]: E0516 05:00:44.435685 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.435695 kubelet[2661]: W0516 05:00:44.435695 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.435811 kubelet[2661]: E0516 05:00:44.435709 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:44.435811 kubelet[2661]: E0516 05:00:44.435846 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.435811 kubelet[2661]: W0516 05:00:44.435853 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.435811 kubelet[2661]: E0516 05:00:44.435860 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.436115 kubelet[2661]: E0516 05:00:44.436005 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.436115 kubelet[2661]: W0516 05:00:44.436012 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.436115 kubelet[2661]: E0516 05:00:44.436021 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:44.436280 kubelet[2661]: E0516 05:00:44.436267 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.436280 kubelet[2661]: W0516 05:00:44.436279 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.436365 kubelet[2661]: E0516 05:00:44.436308 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.436488 kubelet[2661]: E0516 05:00:44.436474 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.436523 kubelet[2661]: W0516 05:00:44.436485 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.436548 kubelet[2661]: E0516 05:00:44.436521 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:44.436702 kubelet[2661]: E0516 05:00:44.436691 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.436744 kubelet[2661]: W0516 05:00:44.436703 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.436744 kubelet[2661]: E0516 05:00:44.436716 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.436947 kubelet[2661]: E0516 05:00:44.436933 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.436947 kubelet[2661]: W0516 05:00:44.436945 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.437067 kubelet[2661]: E0516 05:00:44.437048 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:44.437201 kubelet[2661]: E0516 05:00:44.437188 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.437201 kubelet[2661]: W0516 05:00:44.437201 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.437395 kubelet[2661]: E0516 05:00:44.437360 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.437531 kubelet[2661]: E0516 05:00:44.437520 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.437562 kubelet[2661]: W0516 05:00:44.437531 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.437562 kubelet[2661]: E0516 05:00:44.437545 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:44.437711 kubelet[2661]: E0516 05:00:44.437672 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.437711 kubelet[2661]: W0516 05:00:44.437679 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.437711 kubelet[2661]: E0516 05:00:44.437692 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.437846 kubelet[2661]: E0516 05:00:44.437836 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.437879 kubelet[2661]: W0516 05:00:44.437851 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.437879 kubelet[2661]: E0516 05:00:44.437868 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:44.438028 kubelet[2661]: E0516 05:00:44.438015 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.438028 kubelet[2661]: W0516 05:00:44.438025 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.438071 kubelet[2661]: E0516 05:00:44.438034 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.438165 kubelet[2661]: E0516 05:00:44.438155 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.438165 kubelet[2661]: W0516 05:00:44.438165 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.438211 kubelet[2661]: E0516 05:00:44.438173 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:44.438373 kubelet[2661]: E0516 05:00:44.438361 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.438407 kubelet[2661]: W0516 05:00:44.438373 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.438407 kubelet[2661]: E0516 05:00:44.438382 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:44.448468 kubelet[2661]: E0516 05:00:44.448446 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:44.448468 kubelet[2661]: W0516 05:00:44.448465 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:44.448578 kubelet[2661]: E0516 05:00:44.448482 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:44.650393 containerd[1519]: time="2025-05-16T05:00:44.650212373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5t5jx,Uid:53ade2b1-1a98-4def-8141-6dd53a117013,Namespace:calico-system,Attempt:0,}" May 16 05:00:44.669239 containerd[1519]: time="2025-05-16T05:00:44.669082687Z" level=info msg="connecting to shim bc426f9206a136af867225de8eba0244ec04224c80bf8ed9c43ef49c3c097f0b" address="unix:///run/containerd/s/1ac5b8dc61e2ea9c927782247be973896a670e927a827f2f79d8d37288c94903" namespace=k8s.io protocol=ttrpc version=3 May 16 05:00:44.725555 systemd[1]: Started cri-containerd-bc426f9206a136af867225de8eba0244ec04224c80bf8ed9c43ef49c3c097f0b.scope - libcontainer container bc426f9206a136af867225de8eba0244ec04224c80bf8ed9c43ef49c3c097f0b. May 16 05:00:44.748806 containerd[1519]: time="2025-05-16T05:00:44.748759858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5t5jx,Uid:53ade2b1-1a98-4def-8141-6dd53a117013,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc426f9206a136af867225de8eba0244ec04224c80bf8ed9c43ef49c3c097f0b\"" May 16 05:00:45.413359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3607808310.mount: Deactivated successfully. 
May 16 05:00:45.909686 kubelet[2661]: E0516 05:00:45.909047 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5jxv4" podUID="e6af650f-1da2-49ae-aae7-f7207bc906df" May 16 05:00:46.276845 containerd[1519]: time="2025-05-16T05:00:46.276776470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:46.289900 containerd[1519]: time="2025-05-16T05:00:46.289820132Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33020269" May 16 05:00:46.303671 containerd[1519]: time="2025-05-16T05:00:46.303614323Z" level=info msg="ImageCreate event name:\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:46.313301 containerd[1519]: time="2025-05-16T05:00:46.313250063Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:46.314826 containerd[1519]: time="2025-05-16T05:00:46.314685169Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"33020123\" in 1.883242702s" May 16 05:00:46.314826 containerd[1519]: time="2025-05-16T05:00:46.314721485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference 
\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\"" May 16 05:00:46.316357 containerd[1519]: time="2025-05-16T05:00:46.316166350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 16 05:00:46.340784 containerd[1519]: time="2025-05-16T05:00:46.340729616Z" level=info msg="CreateContainer within sandbox \"af4239a3112c513be02394c18058dd7da1938550b3490d72ce7d4798737e5230\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 16 05:00:46.356569 containerd[1519]: time="2025-05-16T05:00:46.356520620Z" level=info msg="Container 68ee31c73f53d571fce91a3581e9367586e1c52b0bdaeb31d0077e8e9c678eb3: CDI devices from CRI Config.CDIDevices: []" May 16 05:00:46.379090 containerd[1519]: time="2025-05-16T05:00:46.378145600Z" level=info msg="CreateContainer within sandbox \"af4239a3112c513be02394c18058dd7da1938550b3490d72ce7d4798737e5230\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"68ee31c73f53d571fce91a3581e9367586e1c52b0bdaeb31d0077e8e9c678eb3\"" May 16 05:00:46.395001 containerd[1519]: time="2025-05-16T05:00:46.394767127Z" level=info msg="StartContainer for \"68ee31c73f53d571fce91a3581e9367586e1c52b0bdaeb31d0077e8e9c678eb3\"" May 16 05:00:46.396445 containerd[1519]: time="2025-05-16T05:00:46.396418893Z" level=info msg="connecting to shim 68ee31c73f53d571fce91a3581e9367586e1c52b0bdaeb31d0077e8e9c678eb3" address="unix:///run/containerd/s/005e8a7b2ba0006873cde345ccd83b04b9ad236104ff88ac2c33df7bbc21df6a" protocol=ttrpc version=3 May 16 05:00:46.420431 systemd[1]: Started cri-containerd-68ee31c73f53d571fce91a3581e9367586e1c52b0bdaeb31d0077e8e9c678eb3.scope - libcontainer container 68ee31c73f53d571fce91a3581e9367586e1c52b0bdaeb31d0077e8e9c678eb3. 
May 16 05:00:46.467319 containerd[1519]: time="2025-05-16T05:00:46.467267594Z" level=info msg="StartContainer for \"68ee31c73f53d571fce91a3581e9367586e1c52b0bdaeb31d0077e8e9c678eb3\" returns successfully" May 16 05:00:46.997392 kubelet[2661]: I0516 05:00:46.997313 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57d4998764-8xwvz" podStartSLOduration=2.11051542 podStartE2EDuration="3.997296599s" podCreationTimestamp="2025-05-16 05:00:43 +0000 UTC" firstStartedPulling="2025-05-16 05:00:44.428857541 +0000 UTC m=+19.593142489" lastFinishedPulling="2025-05-16 05:00:46.31563872 +0000 UTC m=+21.479923668" observedRunningTime="2025-05-16 05:00:46.996773087 +0000 UTC m=+22.161058035" watchObservedRunningTime="2025-05-16 05:00:46.997296599 +0000 UTC m=+22.161581547" May 16 05:00:47.053670 kubelet[2661]: E0516 05:00:47.053631 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.053670 kubelet[2661]: W0516 05:00:47.053653 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.053670 kubelet[2661]: E0516 05:00:47.053672 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.053856 kubelet[2661]: E0516 05:00:47.053841 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.053856 kubelet[2661]: W0516 05:00:47.053855 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.053916 kubelet[2661]: E0516 05:00:47.053865 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.054190 kubelet[2661]: E0516 05:00:47.054174 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.054224 kubelet[2661]: W0516 05:00:47.054190 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.054224 kubelet[2661]: E0516 05:00:47.054203 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.054470 kubelet[2661]: E0516 05:00:47.054453 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.054497 kubelet[2661]: W0516 05:00:47.054471 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.054497 kubelet[2661]: E0516 05:00:47.054482 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.054690 kubelet[2661]: E0516 05:00:47.054676 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.054716 kubelet[2661]: W0516 05:00:47.054689 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.054716 kubelet[2661]: E0516 05:00:47.054698 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.054846 kubelet[2661]: E0516 05:00:47.054834 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.054870 kubelet[2661]: W0516 05:00:47.054846 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.054870 kubelet[2661]: E0516 05:00:47.054855 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.055147 kubelet[2661]: E0516 05:00:47.055129 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.055147 kubelet[2661]: W0516 05:00:47.055144 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.055203 kubelet[2661]: E0516 05:00:47.055154 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.055319 kubelet[2661]: E0516 05:00:47.055307 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.055342 kubelet[2661]: W0516 05:00:47.055318 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.055342 kubelet[2661]: E0516 05:00:47.055327 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.055511 kubelet[2661]: E0516 05:00:47.055497 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.055511 kubelet[2661]: W0516 05:00:47.055509 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.055561 kubelet[2661]: E0516 05:00:47.055518 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.055970 kubelet[2661]: E0516 05:00:47.055735 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.055970 kubelet[2661]: W0516 05:00:47.055750 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.055970 kubelet[2661]: E0516 05:00:47.055777 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.055970 kubelet[2661]: E0516 05:00:47.055948 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.055970 kubelet[2661]: W0516 05:00:47.055956 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.055970 kubelet[2661]: E0516 05:00:47.055965 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.056374 kubelet[2661]: E0516 05:00:47.056355 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.056409 kubelet[2661]: W0516 05:00:47.056375 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.056409 kubelet[2661]: E0516 05:00:47.056386 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.056981 kubelet[2661]: E0516 05:00:47.056957 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.056981 kubelet[2661]: W0516 05:00:47.056973 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.056981 kubelet[2661]: E0516 05:00:47.056984 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.057163 kubelet[2661]: E0516 05:00:47.057148 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.057163 kubelet[2661]: W0516 05:00:47.057160 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.057212 kubelet[2661]: E0516 05:00:47.057169 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.057339 kubelet[2661]: E0516 05:00:47.057318 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.057339 kubelet[2661]: W0516 05:00:47.057331 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.057339 kubelet[2661]: E0516 05:00:47.057339 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.057698 kubelet[2661]: E0516 05:00:47.057676 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.057698 kubelet[2661]: W0516 05:00:47.057693 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.057783 kubelet[2661]: E0516 05:00:47.057704 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.058322 kubelet[2661]: E0516 05:00:47.058299 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.058322 kubelet[2661]: W0516 05:00:47.058319 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.058412 kubelet[2661]: E0516 05:00:47.058336 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.058737 kubelet[2661]: E0516 05:00:47.058719 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.058737 kubelet[2661]: W0516 05:00:47.058736 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.058794 kubelet[2661]: E0516 05:00:47.058752 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.058965 kubelet[2661]: E0516 05:00:47.058950 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.058995 kubelet[2661]: W0516 05:00:47.058964 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.059131 kubelet[2661]: E0516 05:00:47.059043 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.059201 kubelet[2661]: E0516 05:00:47.059190 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.059225 kubelet[2661]: W0516 05:00:47.059201 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.059268 kubelet[2661]: E0516 05:00:47.059258 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.059389 kubelet[2661]: E0516 05:00:47.059377 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.059416 kubelet[2661]: W0516 05:00:47.059388 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.059416 kubelet[2661]: E0516 05:00:47.059403 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.059702 kubelet[2661]: E0516 05:00:47.059687 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.059726 kubelet[2661]: W0516 05:00:47.059701 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.059940 kubelet[2661]: E0516 05:00:47.059808 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.060031 kubelet[2661]: E0516 05:00:47.060008 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.060031 kubelet[2661]: W0516 05:00:47.060022 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.060154 kubelet[2661]: E0516 05:00:47.060134 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.060215 kubelet[2661]: E0516 05:00:47.060194 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.060215 kubelet[2661]: W0516 05:00:47.060210 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.060289 kubelet[2661]: E0516 05:00:47.060240 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.060449 kubelet[2661]: E0516 05:00:47.060429 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.060449 kubelet[2661]: W0516 05:00:47.060442 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.060507 kubelet[2661]: E0516 05:00:47.060479 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.060683 kubelet[2661]: E0516 05:00:47.060665 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.060683 kubelet[2661]: W0516 05:00:47.060678 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.060762 kubelet[2661]: E0516 05:00:47.060750 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.060936 kubelet[2661]: E0516 05:00:47.060924 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.060958 kubelet[2661]: W0516 05:00:47.060937 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.060958 kubelet[2661]: E0516 05:00:47.060949 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.061499 kubelet[2661]: E0516 05:00:47.061480 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.061579 kubelet[2661]: W0516 05:00:47.061563 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.061602 kubelet[2661]: E0516 05:00:47.061586 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.062301 kubelet[2661]: E0516 05:00:47.062221 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.062301 kubelet[2661]: W0516 05:00:47.062291 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.062462 kubelet[2661]: E0516 05:00:47.062429 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.062649 kubelet[2661]: E0516 05:00:47.062615 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.062649 kubelet[2661]: W0516 05:00:47.062638 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.062705 kubelet[2661]: E0516 05:00:47.062698 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.063000 kubelet[2661]: E0516 05:00:47.062960 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.063000 kubelet[2661]: W0516 05:00:47.062980 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.063192 kubelet[2661]: E0516 05:00:47.063161 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.064198 kubelet[2661]: E0516 05:00:47.064077 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.064240 kubelet[2661]: W0516 05:00:47.064196 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.064240 kubelet[2661]: E0516 05:00:47.064215 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 16 05:00:47.064871 kubelet[2661]: E0516 05:00:47.064842 2661 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 16 05:00:47.064907 kubelet[2661]: W0516 05:00:47.064861 2661 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 16 05:00:47.065373 kubelet[2661]: E0516 05:00:47.064933 2661 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 16 05:00:47.571564 containerd[1519]: time="2025-05-16T05:00:47.571427372Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:47.572528 containerd[1519]: time="2025-05-16T05:00:47.572341052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4264304" May 16 05:00:47.573483 containerd[1519]: time="2025-05-16T05:00:47.573445275Z" level=info msg="ImageCreate event name:\"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:47.575128 containerd[1519]: time="2025-05-16T05:00:47.575098251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:47.576434 containerd[1519]: time="2025-05-16T05:00:47.576314344Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5633505\" in 1.260114917s" May 16 05:00:47.576434 containerd[1519]: time="2025-05-16T05:00:47.576345182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"" May 16 05:00:47.579191 containerd[1519]: time="2025-05-16T05:00:47.579162775Z" level=info msg="CreateContainer within sandbox \"bc426f9206a136af867225de8eba0244ec04224c80bf8ed9c43ef49c3c097f0b\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 16 05:00:47.584961 containerd[1519]: time="2025-05-16T05:00:47.584671892Z" level=info msg="Container fda52befb126c9e9ed0ab3ceb62c3a5d3f09e2ab56f927ccfa83dca531d65dd3: CDI devices from CRI Config.CDIDevices: []" May 16 05:00:47.601880 containerd[1519]: time="2025-05-16T05:00:47.601832509Z" level=info msg="CreateContainer within sandbox \"bc426f9206a136af867225de8eba0244ec04224c80bf8ed9c43ef49c3c097f0b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fda52befb126c9e9ed0ab3ceb62c3a5d3f09e2ab56f927ccfa83dca531d65dd3\"" May 16 05:00:47.602438 containerd[1519]: time="2025-05-16T05:00:47.602382181Z" level=info msg="StartContainer for \"fda52befb126c9e9ed0ab3ceb62c3a5d3f09e2ab56f927ccfa83dca531d65dd3\"" May 16 05:00:47.604133 containerd[1519]: time="2025-05-16T05:00:47.604101231Z" level=info msg="connecting to shim fda52befb126c9e9ed0ab3ceb62c3a5d3f09e2ab56f927ccfa83dca531d65dd3" address="unix:///run/containerd/s/1ac5b8dc61e2ea9c927782247be973896a670e927a827f2f79d8d37288c94903" protocol=ttrpc version=3 May 16 05:00:47.626408 systemd[1]: Started cri-containerd-fda52befb126c9e9ed0ab3ceb62c3a5d3f09e2ab56f927ccfa83dca531d65dd3.scope - libcontainer container fda52befb126c9e9ed0ab3ceb62c3a5d3f09e2ab56f927ccfa83dca531d65dd3. May 16 05:00:47.674173 containerd[1519]: time="2025-05-16T05:00:47.672783135Z" level=info msg="StartContainer for \"fda52befb126c9e9ed0ab3ceb62c3a5d3f09e2ab56f927ccfa83dca531d65dd3\" returns successfully" May 16 05:00:47.703931 systemd[1]: cri-containerd-fda52befb126c9e9ed0ab3ceb62c3a5d3f09e2ab56f927ccfa83dca531d65dd3.scope: Deactivated successfully. 
May 16 05:00:47.730773 containerd[1519]: time="2025-05-16T05:00:47.730721701Z" level=info msg="received exit event container_id:\"fda52befb126c9e9ed0ab3ceb62c3a5d3f09e2ab56f927ccfa83dca531d65dd3\" id:\"fda52befb126c9e9ed0ab3ceb62c3a5d3f09e2ab56f927ccfa83dca531d65dd3\" pid:3318 exited_at:{seconds:1747371647 nanos:717169088}" May 16 05:00:47.737583 containerd[1519]: time="2025-05-16T05:00:47.737513266Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fda52befb126c9e9ed0ab3ceb62c3a5d3f09e2ab56f927ccfa83dca531d65dd3\" id:\"fda52befb126c9e9ed0ab3ceb62c3a5d3f09e2ab56f927ccfa83dca531d65dd3\" pid:3318 exited_at:{seconds:1747371647 nanos:717169088}" May 16 05:00:47.762398 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fda52befb126c9e9ed0ab3ceb62c3a5d3f09e2ab56f927ccfa83dca531d65dd3-rootfs.mount: Deactivated successfully. May 16 05:00:47.909414 kubelet[2661]: E0516 05:00:47.909004 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5jxv4" podUID="e6af650f-1da2-49ae-aae7-f7207bc906df" May 16 05:00:47.990243 kubelet[2661]: I0516 05:00:47.990201 2661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 05:00:47.991518 containerd[1519]: time="2025-05-16T05:00:47.991423548Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 16 05:00:49.909651 kubelet[2661]: E0516 05:00:49.909598 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5jxv4" podUID="e6af650f-1da2-49ae-aae7-f7207bc906df" May 16 05:00:51.675998 containerd[1519]: time="2025-05-16T05:00:51.675951436Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:51.676474 containerd[1519]: time="2025-05-16T05:00:51.676443642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=65748976" May 16 05:00:51.677117 containerd[1519]: time="2025-05-16T05:00:51.677095398Z" level=info msg="ImageCreate event name:\"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:51.678883 containerd[1519]: time="2025-05-16T05:00:51.678847720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:51.680064 containerd[1519]: time="2025-05-16T05:00:51.679975643Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"67118217\" in 3.688490181s" May 16 05:00:51.680064 containerd[1519]: time="2025-05-16T05:00:51.680009241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 16 05:00:51.682289 containerd[1519]: time="2025-05-16T05:00:51.681955389Z" level=info msg="CreateContainer within sandbox \"bc426f9206a136af867225de8eba0244ec04224c80bf8ed9c43ef49c3c097f0b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 16 05:00:51.690181 containerd[1519]: time="2025-05-16T05:00:51.690138636Z" level=info msg="Container 82956a05548af1ab6b08f647fa6e5d12b12a12905d38e0252368c0a358b559eb: CDI devices from CRI 
Config.CDIDevices: []" May 16 05:00:51.699007 containerd[1519]: time="2025-05-16T05:00:51.698956959Z" level=info msg="CreateContainer within sandbox \"bc426f9206a136af867225de8eba0244ec04224c80bf8ed9c43ef49c3c097f0b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"82956a05548af1ab6b08f647fa6e5d12b12a12905d38e0252368c0a358b559eb\"" May 16 05:00:51.699532 containerd[1519]: time="2025-05-16T05:00:51.699458445Z" level=info msg="StartContainer for \"82956a05548af1ab6b08f647fa6e5d12b12a12905d38e0252368c0a358b559eb\"" May 16 05:00:51.700876 containerd[1519]: time="2025-05-16T05:00:51.700851351Z" level=info msg="connecting to shim 82956a05548af1ab6b08f647fa6e5d12b12a12905d38e0252368c0a358b559eb" address="unix:///run/containerd/s/1ac5b8dc61e2ea9c927782247be973896a670e927a827f2f79d8d37288c94903" protocol=ttrpc version=3 May 16 05:00:51.720387 systemd[1]: Started cri-containerd-82956a05548af1ab6b08f647fa6e5d12b12a12905d38e0252368c0a358b559eb.scope - libcontainer container 82956a05548af1ab6b08f647fa6e5d12b12a12905d38e0252368c0a358b559eb. May 16 05:00:51.754858 containerd[1519]: time="2025-05-16T05:00:51.754820460Z" level=info msg="StartContainer for \"82956a05548af1ab6b08f647fa6e5d12b12a12905d38e0252368c0a358b559eb\" returns successfully" May 16 05:00:51.908995 kubelet[2661]: E0516 05:00:51.908894 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5jxv4" podUID="e6af650f-1da2-49ae-aae7-f7207bc906df" May 16 05:00:52.252815 systemd[1]: cri-containerd-82956a05548af1ab6b08f647fa6e5d12b12a12905d38e0252368c0a358b559eb.scope: Deactivated successfully. 
May 16 05:00:52.253129 systemd[1]: cri-containerd-82956a05548af1ab6b08f647fa6e5d12b12a12905d38e0252368c0a358b559eb.scope: Consumed 438ms CPU time, 177.5M memory peak, 3.4M read from disk, 165.5M written to disk. May 16 05:00:52.254396 containerd[1519]: time="2025-05-16T05:00:52.254339060Z" level=info msg="TaskExit event in podsandbox handler container_id:\"82956a05548af1ab6b08f647fa6e5d12b12a12905d38e0252368c0a358b559eb\" id:\"82956a05548af1ab6b08f647fa6e5d12b12a12905d38e0252368c0a358b559eb\" pid:3381 exited_at:{seconds:1747371652 nanos:253925366}" May 16 05:00:52.263554 containerd[1519]: time="2025-05-16T05:00:52.263484000Z" level=info msg="received exit event container_id:\"82956a05548af1ab6b08f647fa6e5d12b12a12905d38e0252368c0a358b559eb\" id:\"82956a05548af1ab6b08f647fa6e5d12b12a12905d38e0252368c0a358b559eb\" pid:3381 exited_at:{seconds:1747371652 nanos:253925366}" May 16 05:00:52.282874 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82956a05548af1ab6b08f647fa6e5d12b12a12905d38e0252368c0a358b559eb-rootfs.mount: Deactivated successfully. May 16 05:00:52.373266 kubelet[2661]: I0516 05:00:52.373210 2661 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 16 05:00:52.465370 systemd[1]: Created slice kubepods-burstable-pod704fe550_765d_43cc_8dcb_1eabf4abbe0c.slice - libcontainer container kubepods-burstable-pod704fe550_765d_43cc_8dcb_1eabf4abbe0c.slice. May 16 05:00:52.472086 systemd[1]: Created slice kubepods-burstable-podeafc01ac_e2de_4f92_86b0_314fd75a243e.slice - libcontainer container kubepods-burstable-podeafc01ac_e2de_4f92_86b0_314fd75a243e.slice. May 16 05:00:52.477507 systemd[1]: Created slice kubepods-besteffort-pod103355f8_b44d_4322_8416_0a85611bc48f.slice - libcontainer container kubepods-besteffort-pod103355f8_b44d_4322_8416_0a85611bc48f.slice. 
May 16 05:00:52.482837 systemd[1]: Created slice kubepods-besteffort-pod6f7a9ab8_1325_4d5f_862e_18057d14ccb1.slice - libcontainer container kubepods-besteffort-pod6f7a9ab8_1325_4d5f_862e_18057d14ccb1.slice. May 16 05:00:52.495140 systemd[1]: Created slice kubepods-besteffort-pod3a8dd0e7_85f8_4a75_8ec5_c5d246bfec73.slice - libcontainer container kubepods-besteffort-pod3a8dd0e7_85f8_4a75_8ec5_c5d246bfec73.slice. May 16 05:00:52.496098 kubelet[2661]: I0516 05:00:52.495420 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc024b19-433a-4c77-9817-d8df23a019d7-whisker-ca-bundle\") pod \"whisker-c5ddb74b7-r7lmz\" (UID: \"fc024b19-433a-4c77-9817-d8df23a019d7\") " pod="calico-system/whisker-c5ddb74b7-r7lmz" May 16 05:00:52.496098 kubelet[2661]: I0516 05:00:52.495503 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pppz7\" (UniqueName: \"kubernetes.io/projected/fc024b19-433a-4c77-9817-d8df23a019d7-kube-api-access-pppz7\") pod \"whisker-c5ddb74b7-r7lmz\" (UID: \"fc024b19-433a-4c77-9817-d8df23a019d7\") " pod="calico-system/whisker-c5ddb74b7-r7lmz" May 16 05:00:52.496098 kubelet[2661]: I0516 05:00:52.495525 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbqwt\" (UniqueName: \"kubernetes.io/projected/103355f8-b44d-4322-8416-0a85611bc48f-kube-api-access-qbqwt\") pod \"calico-apiserver-d7f886b9f-6ssw6\" (UID: \"103355f8-b44d-4322-8416-0a85611bc48f\") " pod="calico-apiserver/calico-apiserver-d7f886b9f-6ssw6" May 16 05:00:52.496333 kubelet[2661]: I0516 05:00:52.496315 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73-config\") pod \"goldmane-8f77d7b6c-nwhgj\" (UID: 
\"3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73\") " pod="calico-system/goldmane-8f77d7b6c-nwhgj" May 16 05:00:52.496576 kubelet[2661]: I0516 05:00:52.496557 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wm7p\" (UniqueName: \"kubernetes.io/projected/18d45456-22c3-4417-8b57-504e65cd40a3-kube-api-access-8wm7p\") pod \"calico-apiserver-d7f886b9f-r957p\" (UID: \"18d45456-22c3-4417-8b57-504e65cd40a3\") " pod="calico-apiserver/calico-apiserver-d7f886b9f-r957p" May 16 05:00:52.496778 kubelet[2661]: I0516 05:00:52.496763 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fc024b19-433a-4c77-9817-d8df23a019d7-whisker-backend-key-pair\") pod \"whisker-c5ddb74b7-r7lmz\" (UID: \"fc024b19-433a-4c77-9817-d8df23a019d7\") " pod="calico-system/whisker-c5ddb74b7-r7lmz" May 16 05:00:52.496992 kubelet[2661]: I0516 05:00:52.496976 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73-goldmane-key-pair\") pod \"goldmane-8f77d7b6c-nwhgj\" (UID: \"3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73\") " pod="calico-system/goldmane-8f77d7b6c-nwhgj" May 16 05:00:52.497728 kubelet[2661]: I0516 05:00:52.497456 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/704fe550-765d-43cc-8dcb-1eabf4abbe0c-config-volume\") pod \"coredns-7c65d6cfc9-lvp4t\" (UID: \"704fe550-765d-43cc-8dcb-1eabf4abbe0c\") " pod="kube-system/coredns-7c65d6cfc9-lvp4t" May 16 05:00:52.498094 kubelet[2661]: I0516 05:00:52.498073 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvpw6\" (UniqueName: 
\"kubernetes.io/projected/704fe550-765d-43cc-8dcb-1eabf4abbe0c-kube-api-access-dvpw6\") pod \"coredns-7c65d6cfc9-lvp4t\" (UID: \"704fe550-765d-43cc-8dcb-1eabf4abbe0c\") " pod="kube-system/coredns-7c65d6cfc9-lvp4t" May 16 05:00:52.498222 kubelet[2661]: I0516 05:00:52.498209 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eafc01ac-e2de-4f92-86b0-314fd75a243e-config-volume\") pod \"coredns-7c65d6cfc9-22d75\" (UID: \"eafc01ac-e2de-4f92-86b0-314fd75a243e\") " pod="kube-system/coredns-7c65d6cfc9-22d75" May 16 05:00:52.498346 kubelet[2661]: I0516 05:00:52.498333 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w259n\" (UniqueName: \"kubernetes.io/projected/eafc01ac-e2de-4f92-86b0-314fd75a243e-kube-api-access-w259n\") pod \"coredns-7c65d6cfc9-22d75\" (UID: \"eafc01ac-e2de-4f92-86b0-314fd75a243e\") " pod="kube-system/coredns-7c65d6cfc9-22d75" May 16 05:00:52.498498 kubelet[2661]: I0516 05:00:52.498445 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f7a9ab8-1325-4d5f-862e-18057d14ccb1-tigera-ca-bundle\") pod \"calico-kube-controllers-f4f5c4487-mct8b\" (UID: \"6f7a9ab8-1325-4d5f-862e-18057d14ccb1\") " pod="calico-system/calico-kube-controllers-f4f5c4487-mct8b" May 16 05:00:52.498498 kubelet[2661]: I0516 05:00:52.498471 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/18d45456-22c3-4417-8b57-504e65cd40a3-calico-apiserver-certs\") pod \"calico-apiserver-d7f886b9f-r957p\" (UID: \"18d45456-22c3-4417-8b57-504e65cd40a3\") " pod="calico-apiserver/calico-apiserver-d7f886b9f-r957p" May 16 05:00:52.499043 kubelet[2661]: I0516 05:00:52.498910 2661 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lv9wg\" (UniqueName: \"kubernetes.io/projected/6f7a9ab8-1325-4d5f-862e-18057d14ccb1-kube-api-access-lv9wg\") pod \"calico-kube-controllers-f4f5c4487-mct8b\" (UID: \"6f7a9ab8-1325-4d5f-862e-18057d14ccb1\") " pod="calico-system/calico-kube-controllers-f4f5c4487-mct8b" May 16 05:00:52.499043 kubelet[2661]: I0516 05:00:52.498949 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcx28\" (UniqueName: \"kubernetes.io/projected/3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73-kube-api-access-xcx28\") pod \"goldmane-8f77d7b6c-nwhgj\" (UID: \"3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73\") " pod="calico-system/goldmane-8f77d7b6c-nwhgj" May 16 05:00:52.499043 kubelet[2661]: I0516 05:00:52.498973 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/103355f8-b44d-4322-8416-0a85611bc48f-calico-apiserver-certs\") pod \"calico-apiserver-d7f886b9f-6ssw6\" (UID: \"103355f8-b44d-4322-8416-0a85611bc48f\") " pod="calico-apiserver/calico-apiserver-d7f886b9f-6ssw6" May 16 05:00:52.499043 kubelet[2661]: I0516 05:00:52.498990 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73-goldmane-ca-bundle\") pod \"goldmane-8f77d7b6c-nwhgj\" (UID: \"3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73\") " pod="calico-system/goldmane-8f77d7b6c-nwhgj" May 16 05:00:52.500086 systemd[1]: Created slice kubepods-besteffort-pod18d45456_22c3_4417_8b57_504e65cd40a3.slice - libcontainer container kubepods-besteffort-pod18d45456_22c3_4417_8b57_504e65cd40a3.slice. 
May 16 05:00:52.506964 systemd[1]: Created slice kubepods-besteffort-podfc024b19_433a_4c77_9817_d8df23a019d7.slice - libcontainer container kubepods-besteffort-podfc024b19_433a_4c77_9817_d8df23a019d7.slice. May 16 05:00:52.769399 containerd[1519]: time="2025-05-16T05:00:52.769285920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lvp4t,Uid:704fe550-765d-43cc-8dcb-1eabf4abbe0c,Namespace:kube-system,Attempt:0,}" May 16 05:00:52.775015 containerd[1519]: time="2025-05-16T05:00:52.774983958Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-22d75,Uid:eafc01ac-e2de-4f92-86b0-314fd75a243e,Namespace:kube-system,Attempt:0,}" May 16 05:00:52.780813 containerd[1519]: time="2025-05-16T05:00:52.780196748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7f886b9f-6ssw6,Uid:103355f8-b44d-4322-8416-0a85611bc48f,Namespace:calico-apiserver,Attempt:0,}" May 16 05:00:52.807469 containerd[1519]: time="2025-05-16T05:00:52.807436020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7f886b9f-r957p,Uid:18d45456-22c3-4417-8b57-504e65cd40a3,Namespace:calico-apiserver,Attempt:0,}" May 16 05:00:52.820699 containerd[1519]: time="2025-05-16T05:00:52.820657742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f4f5c4487-mct8b,Uid:6f7a9ab8-1325-4d5f-862e-18057d14ccb1,Namespace:calico-system,Attempt:0,}" May 16 05:00:52.820824 containerd[1519]: time="2025-05-16T05:00:52.820806052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-nwhgj,Uid:3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73,Namespace:calico-system,Attempt:0,}" May 16 05:00:52.820953 containerd[1519]: time="2025-05-16T05:00:52.820935524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c5ddb74b7-r7lmz,Uid:fc024b19-433a-4c77-9817-d8df23a019d7,Namespace:calico-system,Attempt:0,}" May 16 05:00:53.075878 containerd[1519]: 
time="2025-05-16T05:00:53.074251711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 16 05:00:53.172987 containerd[1519]: time="2025-05-16T05:00:53.172695857Z" level=error msg="Failed to destroy network for sandbox \"d069764834b5c507fe934f858c2000dd6f81777a4e64604d17b0e51ccc5fac2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.174680 containerd[1519]: time="2025-05-16T05:00:53.174635342Z" level=error msg="Failed to destroy network for sandbox \"3a793e58ea76f99403d1b8ea25c9685fa55184f7ac5e493449b05d7761bf34ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.176394 containerd[1519]: time="2025-05-16T05:00:53.176357200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-22d75,Uid:eafc01ac-e2de-4f92-86b0-314fd75a243e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d069764834b5c507fe934f858c2000dd6f81777a4e64604d17b0e51ccc5fac2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.177126 containerd[1519]: time="2025-05-16T05:00:53.177007601Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7f886b9f-r957p,Uid:18d45456-22c3-4417-8b57-504e65cd40a3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a793e58ea76f99403d1b8ea25c9685fa55184f7ac5e493449b05d7761bf34ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" May 16 05:00:53.177272 containerd[1519]: time="2025-05-16T05:00:53.177219268Z" level=error msg="Failed to destroy network for sandbox \"9dfee0d9a969e0f7d22413e9583758f51d243a19e4cd39d8f65ff1291482a685\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.178261 containerd[1519]: time="2025-05-16T05:00:53.178186571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lvp4t,Uid:704fe550-765d-43cc-8dcb-1eabf4abbe0c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dfee0d9a969e0f7d22413e9583758f51d243a19e4cd39d8f65ff1291482a685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.179270 kubelet[2661]: E0516 05:00:53.179190 2661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dfee0d9a969e0f7d22413e9583758f51d243a19e4cd39d8f65ff1291482a685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.179270 kubelet[2661]: E0516 05:00:53.179243 2661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a793e58ea76f99403d1b8ea25c9685fa55184f7ac5e493449b05d7761bf34ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.179581 kubelet[2661]: E0516 05:00:53.179202 2661 log.go:32] "RunPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d069764834b5c507fe934f858c2000dd6f81777a4e64604d17b0e51ccc5fac2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.179994 kubelet[2661]: E0516 05:00:53.179948 2661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a793e58ea76f99403d1b8ea25c9685fa55184f7ac5e493449b05d7761bf34ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d7f886b9f-r957p" May 16 05:00:53.180067 kubelet[2661]: E0516 05:00:53.179993 2661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a793e58ea76f99403d1b8ea25c9685fa55184f7ac5e493449b05d7761bf34ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d7f886b9f-r957p" May 16 05:00:53.180067 kubelet[2661]: E0516 05:00:53.180043 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d7f886b9f-r957p_calico-apiserver(18d45456-22c3-4417-8b57-504e65cd40a3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d7f886b9f-r957p_calico-apiserver(18d45456-22c3-4417-8b57-504e65cd40a3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a793e58ea76f99403d1b8ea25c9685fa55184f7ac5e493449b05d7761bf34ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d7f886b9f-r957p" podUID="18d45456-22c3-4417-8b57-504e65cd40a3" May 16 05:00:53.181309 kubelet[2661]: E0516 05:00:53.180278 2661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d069764834b5c507fe934f858c2000dd6f81777a4e64604d17b0e51ccc5fac2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-22d75" May 16 05:00:53.181309 kubelet[2661]: E0516 05:00:53.180310 2661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d069764834b5c507fe934f858c2000dd6f81777a4e64604d17b0e51ccc5fac2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-22d75" May 16 05:00:53.181309 kubelet[2661]: E0516 05:00:53.180370 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-22d75_kube-system(eafc01ac-e2de-4f92-86b0-314fd75a243e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-22d75_kube-system(eafc01ac-e2de-4f92-86b0-314fd75a243e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d069764834b5c507fe934f858c2000dd6f81777a4e64604d17b0e51ccc5fac2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-22d75" podUID="eafc01ac-e2de-4f92-86b0-314fd75a243e" May 16 05:00:53.181492 
kubelet[2661]: E0516 05:00:53.181161 2661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dfee0d9a969e0f7d22413e9583758f51d243a19e4cd39d8f65ff1291482a685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-lvp4t" May 16 05:00:53.181492 kubelet[2661]: E0516 05:00:53.181199 2661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dfee0d9a969e0f7d22413e9583758f51d243a19e4cd39d8f65ff1291482a685\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-lvp4t" May 16 05:00:53.181492 kubelet[2661]: E0516 05:00:53.181327 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-lvp4t_kube-system(704fe550-765d-43cc-8dcb-1eabf4abbe0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-lvp4t_kube-system(704fe550-765d-43cc-8dcb-1eabf4abbe0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dfee0d9a969e0f7d22413e9583758f51d243a19e4cd39d8f65ff1291482a685\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-lvp4t" podUID="704fe550-765d-43cc-8dcb-1eabf4abbe0c" May 16 05:00:53.184509 containerd[1519]: time="2025-05-16T05:00:53.184227772Z" level=error msg="Failed to destroy network for sandbox \"4f81a3df76fef1b1602ef109e10fbba9743ef5fbdc4cb7b0e2ff4f44afa0a0b7\"" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.185683 containerd[1519]: time="2025-05-16T05:00:53.185630648Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-nwhgj,Uid:3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f81a3df76fef1b1602ef109e10fbba9743ef5fbdc4cb7b0e2ff4f44afa0a0b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.185938 kubelet[2661]: E0516 05:00:53.185845 2661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f81a3df76fef1b1602ef109e10fbba9743ef5fbdc4cb7b0e2ff4f44afa0a0b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.185938 kubelet[2661]: E0516 05:00:53.185928 2661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f81a3df76fef1b1602ef109e10fbba9743ef5fbdc4cb7b0e2ff4f44afa0a0b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-nwhgj" May 16 05:00:53.186018 kubelet[2661]: E0516 05:00:53.185946 2661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4f81a3df76fef1b1602ef109e10fbba9743ef5fbdc4cb7b0e2ff4f44afa0a0b7\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-8f77d7b6c-nwhgj" May 16 05:00:53.186018 kubelet[2661]: E0516 05:00:53.185980 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-8f77d7b6c-nwhgj_calico-system(3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-8f77d7b6c-nwhgj_calico-system(3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4f81a3df76fef1b1602ef109e10fbba9743ef5fbdc4cb7b0e2ff4f44afa0a0b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-8f77d7b6c-nwhgj" podUID="3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73" May 16 05:00:53.188755 containerd[1519]: time="2025-05-16T05:00:53.188714385Z" level=error msg="Failed to destroy network for sandbox \"4eaed3d5a890f3a53f6d5c3489bf7fc7d78bf1a89593905ba7178771f9d921e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.189828 containerd[1519]: time="2025-05-16T05:00:53.189771922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7f886b9f-6ssw6,Uid:103355f8-b44d-4322-8416-0a85611bc48f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eaed3d5a890f3a53f6d5c3489bf7fc7d78bf1a89593905ba7178771f9d921e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.190985 
kubelet[2661]: E0516 05:00:53.190953 2661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eaed3d5a890f3a53f6d5c3489bf7fc7d78bf1a89593905ba7178771f9d921e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.191076 kubelet[2661]: E0516 05:00:53.191004 2661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eaed3d5a890f3a53f6d5c3489bf7fc7d78bf1a89593905ba7178771f9d921e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d7f886b9f-6ssw6" May 16 05:00:53.191076 kubelet[2661]: E0516 05:00:53.191023 2661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eaed3d5a890f3a53f6d5c3489bf7fc7d78bf1a89593905ba7178771f9d921e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d7f886b9f-6ssw6" May 16 05:00:53.191125 kubelet[2661]: E0516 05:00:53.191083 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d7f886b9f-6ssw6_calico-apiserver(103355f8-b44d-4322-8416-0a85611bc48f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d7f886b9f-6ssw6_calico-apiserver(103355f8-b44d-4322-8416-0a85611bc48f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4eaed3d5a890f3a53f6d5c3489bf7fc7d78bf1a89593905ba7178771f9d921e8\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d7f886b9f-6ssw6" podUID="103355f8-b44d-4322-8416-0a85611bc48f" May 16 05:00:53.191795 containerd[1519]: time="2025-05-16T05:00:53.191753124Z" level=error msg="Failed to destroy network for sandbox \"8648daad3b7da707beaf05182a6ca2e8ead0f026d283cf6f2873c94367e362ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.192812 containerd[1519]: time="2025-05-16T05:00:53.192777023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c5ddb74b7-r7lmz,Uid:fc024b19-433a-4c77-9817-d8df23a019d7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8648daad3b7da707beaf05182a6ca2e8ead0f026d283cf6f2873c94367e362ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.192978 kubelet[2661]: E0516 05:00:53.192953 2661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8648daad3b7da707beaf05182a6ca2e8ead0f026d283cf6f2873c94367e362ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.193015 kubelet[2661]: E0516 05:00:53.192988 2661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8648daad3b7da707beaf05182a6ca2e8ead0f026d283cf6f2873c94367e362ee\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c5ddb74b7-r7lmz" May 16 05:00:53.193015 kubelet[2661]: E0516 05:00:53.193005 2661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8648daad3b7da707beaf05182a6ca2e8ead0f026d283cf6f2873c94367e362ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c5ddb74b7-r7lmz" May 16 05:00:53.193058 kubelet[2661]: E0516 05:00:53.193038 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c5ddb74b7-r7lmz_calico-system(fc024b19-433a-4c77-9817-d8df23a019d7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c5ddb74b7-r7lmz_calico-system(fc024b19-433a-4c77-9817-d8df23a019d7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8648daad3b7da707beaf05182a6ca2e8ead0f026d283cf6f2873c94367e362ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c5ddb74b7-r7lmz" podUID="fc024b19-433a-4c77-9817-d8df23a019d7" May 16 05:00:53.196487 containerd[1519]: time="2025-05-16T05:00:53.196410207Z" level=error msg="Failed to destroy network for sandbox \"0c39be99347dc5978945fdc3f59c21871630bca7fe9755a2b4f28ef28007cf15\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.197280 containerd[1519]: time="2025-05-16T05:00:53.197240558Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-f4f5c4487-mct8b,Uid:6f7a9ab8-1325-4d5f-862e-18057d14ccb1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c39be99347dc5978945fdc3f59c21871630bca7fe9755a2b4f28ef28007cf15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.197519 kubelet[2661]: E0516 05:00:53.197492 2661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c39be99347dc5978945fdc3f59c21871630bca7fe9755a2b4f28ef28007cf15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.197581 kubelet[2661]: E0516 05:00:53.197531 2661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c39be99347dc5978945fdc3f59c21871630bca7fe9755a2b4f28ef28007cf15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f4f5c4487-mct8b" May 16 05:00:53.197581 kubelet[2661]: E0516 05:00:53.197548 2661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c39be99347dc5978945fdc3f59c21871630bca7fe9755a2b4f28ef28007cf15\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-f4f5c4487-mct8b" May 16 05:00:53.197654 kubelet[2661]: E0516 05:00:53.197581 
2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-f4f5c4487-mct8b_calico-system(6f7a9ab8-1325-4d5f-862e-18057d14ccb1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-f4f5c4487-mct8b_calico-system(6f7a9ab8-1325-4d5f-862e-18057d14ccb1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c39be99347dc5978945fdc3f59c21871630bca7fe9755a2b4f28ef28007cf15\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-f4f5c4487-mct8b" podUID="6f7a9ab8-1325-4d5f-862e-18057d14ccb1" May 16 05:00:53.690691 systemd[1]: run-netns-cni\x2d71c5e0d7\x2ddadb\x2d79cd\x2d9414\x2d012ee901886b.mount: Deactivated successfully. May 16 05:00:53.690789 systemd[1]: run-netns-cni\x2df7a696d9\x2df9f5\x2d5bf3\x2dc021\x2dd68cf16d643a.mount: Deactivated successfully. May 16 05:00:53.690836 systemd[1]: run-netns-cni\x2ddde9c999\x2d9fa9\x2d0398\x2d5647\x2d47a373f18f37.mount: Deactivated successfully. May 16 05:00:53.690878 systemd[1]: run-netns-cni\x2deb68d060\x2d04d7\x2dc275\x2d9625\x2d85b9868a5a22.mount: Deactivated successfully. May 16 05:00:53.914349 systemd[1]: Created slice kubepods-besteffort-pode6af650f_1da2_49ae_aae7_f7207bc906df.slice - libcontainer container kubepods-besteffort-pode6af650f_1da2_49ae_aae7_f7207bc906df.slice. 
May 16 05:00:53.917040 containerd[1519]: time="2025-05-16T05:00:53.916789533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5jxv4,Uid:e6af650f-1da2-49ae-aae7-f7207bc906df,Namespace:calico-system,Attempt:0,}" May 16 05:00:53.956905 containerd[1519]: time="2025-05-16T05:00:53.956860111Z" level=error msg="Failed to destroy network for sandbox \"97bf182130bd6da2a283a2fd45d5cc52802e628dc664f25f562a043a10556217\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.958681 systemd[1]: run-netns-cni\x2dc6f05db9\x2d2c18\x2dd8e6\x2d2e45\x2d87dd1f1c15cf.mount: Deactivated successfully. May 16 05:00:53.959830 containerd[1519]: time="2025-05-16T05:00:53.959725940Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5jxv4,Uid:e6af650f-1da2-49ae-aae7-f7207bc906df,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97bf182130bd6da2a283a2fd45d5cc52802e628dc664f25f562a043a10556217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.960017 kubelet[2661]: E0516 05:00:53.959978 2661 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97bf182130bd6da2a283a2fd45d5cc52802e628dc664f25f562a043a10556217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 16 05:00:53.960063 kubelet[2661]: E0516 05:00:53.960036 2661 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"97bf182130bd6da2a283a2fd45d5cc52802e628dc664f25f562a043a10556217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5jxv4" May 16 05:00:53.960104 kubelet[2661]: E0516 05:00:53.960061 2661 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97bf182130bd6da2a283a2fd45d5cc52802e628dc664f25f562a043a10556217\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5jxv4" May 16 05:00:53.960135 kubelet[2661]: E0516 05:00:53.960111 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5jxv4_calico-system(e6af650f-1da2-49ae-aae7-f7207bc906df)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5jxv4_calico-system(e6af650f-1da2-49ae-aae7-f7207bc906df)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97bf182130bd6da2a283a2fd45d5cc52802e628dc664f25f562a043a10556217\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5jxv4" podUID="e6af650f-1da2-49ae-aae7-f7207bc906df" May 16 05:00:57.110035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1462916237.mount: Deactivated successfully. 
May 16 05:00:57.241879 containerd[1519]: time="2025-05-16T05:00:57.241830340Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=150465379" May 16 05:00:57.244993 containerd[1519]: time="2025-05-16T05:00:57.244952967Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"150465241\" in 4.170657338s" May 16 05:00:57.244993 containerd[1519]: time="2025-05-16T05:00:57.244988486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 16 05:00:57.251959 containerd[1519]: time="2025-05-16T05:00:57.251676851Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:57.253252 containerd[1519]: time="2025-05-16T05:00:57.252702994Z" level=info msg="ImageCreate event name:\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:57.253364 containerd[1519]: time="2025-05-16T05:00:57.253341503Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:00:57.262256 containerd[1519]: time="2025-05-16T05:00:57.262203311Z" level=info msg="CreateContainer within sandbox \"bc426f9206a136af867225de8eba0244ec04224c80bf8ed9c43ef49c3c097f0b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 16 05:00:57.287129 containerd[1519]: time="2025-05-16T05:00:57.287086404Z" level=info msg="Container 
4b0736176feb65d7621fd07a159d2f2154bb9bc66274d1423b3aaa31b63facd7: CDI devices from CRI Config.CDIDevices: []" May 16 05:00:57.288588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2845500622.mount: Deactivated successfully. May 16 05:00:57.295980 containerd[1519]: time="2025-05-16T05:00:57.295938092Z" level=info msg="CreateContainer within sandbox \"bc426f9206a136af867225de8eba0244ec04224c80bf8ed9c43ef49c3c097f0b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4b0736176feb65d7621fd07a159d2f2154bb9bc66274d1423b3aaa31b63facd7\"" May 16 05:00:57.296757 containerd[1519]: time="2025-05-16T05:00:57.296730678Z" level=info msg="StartContainer for \"4b0736176feb65d7621fd07a159d2f2154bb9bc66274d1423b3aaa31b63facd7\"" May 16 05:00:57.298161 containerd[1519]: time="2025-05-16T05:00:57.298134054Z" level=info msg="connecting to shim 4b0736176feb65d7621fd07a159d2f2154bb9bc66274d1423b3aaa31b63facd7" address="unix:///run/containerd/s/1ac5b8dc61e2ea9c927782247be973896a670e927a827f2f79d8d37288c94903" protocol=ttrpc version=3 May 16 05:00:57.318439 systemd[1]: Started cri-containerd-4b0736176feb65d7621fd07a159d2f2154bb9bc66274d1423b3aaa31b63facd7.scope - libcontainer container 4b0736176feb65d7621fd07a159d2f2154bb9bc66274d1423b3aaa31b63facd7. May 16 05:00:57.351943 containerd[1519]: time="2025-05-16T05:00:57.351903652Z" level=info msg="StartContainer for \"4b0736176feb65d7621fd07a159d2f2154bb9bc66274d1423b3aaa31b63facd7\" returns successfully" May 16 05:00:57.537935 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 16 05:00:57.538070 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 16 05:00:57.758875 kubelet[2661]: I0516 05:00:57.758752 2661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc024b19-433a-4c77-9817-d8df23a019d7-whisker-ca-bundle\") pod \"fc024b19-433a-4c77-9817-d8df23a019d7\" (UID: \"fc024b19-433a-4c77-9817-d8df23a019d7\") " May 16 05:00:57.758875 kubelet[2661]: I0516 05:00:57.758820 2661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pppz7\" (UniqueName: \"kubernetes.io/projected/fc024b19-433a-4c77-9817-d8df23a019d7-kube-api-access-pppz7\") pod \"fc024b19-433a-4c77-9817-d8df23a019d7\" (UID: \"fc024b19-433a-4c77-9817-d8df23a019d7\") " May 16 05:00:57.758875 kubelet[2661]: I0516 05:00:57.758841 2661 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fc024b19-433a-4c77-9817-d8df23a019d7-whisker-backend-key-pair\") pod \"fc024b19-433a-4c77-9817-d8df23a019d7\" (UID: \"fc024b19-433a-4c77-9817-d8df23a019d7\") " May 16 05:00:57.769741 kubelet[2661]: I0516 05:00:57.769676 2661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc024b19-433a-4c77-9817-d8df23a019d7-kube-api-access-pppz7" (OuterVolumeSpecName: "kube-api-access-pppz7") pod "fc024b19-433a-4c77-9817-d8df23a019d7" (UID: "fc024b19-433a-4c77-9817-d8df23a019d7"). InnerVolumeSpecName "kube-api-access-pppz7". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 05:00:57.770935 kubelet[2661]: I0516 05:00:57.770889 2661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc024b19-433a-4c77-9817-d8df23a019d7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "fc024b19-433a-4c77-9817-d8df23a019d7" (UID: "fc024b19-433a-4c77-9817-d8df23a019d7"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 05:00:57.780853 kubelet[2661]: I0516 05:00:57.780804 2661 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc024b19-433a-4c77-9817-d8df23a019d7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "fc024b19-433a-4c77-9817-d8df23a019d7" (UID: "fc024b19-433a-4c77-9817-d8df23a019d7"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 05:00:57.860335 kubelet[2661]: I0516 05:00:57.860207 2661 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc024b19-433a-4c77-9817-d8df23a019d7-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 16 05:00:57.860335 kubelet[2661]: I0516 05:00:57.860255 2661 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/fc024b19-433a-4c77-9817-d8df23a019d7-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 16 05:00:57.860335 kubelet[2661]: I0516 05:00:57.860266 2661 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pppz7\" (UniqueName: \"kubernetes.io/projected/fc024b19-433a-4c77-9817-d8df23a019d7-kube-api-access-pppz7\") on node \"localhost\" DevicePath \"\"" May 16 05:00:58.055068 systemd[1]: Removed slice kubepods-besteffort-podfc024b19_433a_4c77_9817_d8df23a019d7.slice - libcontainer container kubepods-besteffort-podfc024b19_433a_4c77_9817_d8df23a019d7.slice. 
May 16 05:00:58.072644 kubelet[2661]: I0516 05:00:58.072592 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5t5jx" podStartSLOduration=1.574681481 podStartE2EDuration="14.07257648s" podCreationTimestamp="2025-05-16 05:00:44 +0000 UTC" firstStartedPulling="2025-05-16 05:00:44.749646043 +0000 UTC m=+19.913930991" lastFinishedPulling="2025-05-16 05:00:57.247541042 +0000 UTC m=+32.411825990" observedRunningTime="2025-05-16 05:00:58.068853462 +0000 UTC m=+33.233138410" watchObservedRunningTime="2025-05-16 05:00:58.07257648 +0000 UTC m=+33.236861428" May 16 05:00:58.113503 systemd[1]: var-lib-kubelet-pods-fc024b19\x2d433a\x2d4c77\x2d9817\x2dd8df23a019d7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpppz7.mount: Deactivated successfully. May 16 05:00:58.113590 systemd[1]: var-lib-kubelet-pods-fc024b19\x2d433a\x2d4c77\x2d9817\x2dd8df23a019d7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 16 05:00:58.123591 systemd[1]: Created slice kubepods-besteffort-pod5f454e4d_8aa4_46c9_8e87_444cb605f063.slice - libcontainer container kubepods-besteffort-pod5f454e4d_8aa4_46c9_8e87_444cb605f063.slice. 
May 16 05:00:58.162725 kubelet[2661]: I0516 05:00:58.162675 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5f454e4d-8aa4-46c9-8e87-444cb605f063-whisker-backend-key-pair\") pod \"whisker-56cbc7d65f-plq4q\" (UID: \"5f454e4d-8aa4-46c9-8e87-444cb605f063\") " pod="calico-system/whisker-56cbc7d65f-plq4q" May 16 05:00:58.162725 kubelet[2661]: I0516 05:00:58.162732 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdf96\" (UniqueName: \"kubernetes.io/projected/5f454e4d-8aa4-46c9-8e87-444cb605f063-kube-api-access-xdf96\") pod \"whisker-56cbc7d65f-plq4q\" (UID: \"5f454e4d-8aa4-46c9-8e87-444cb605f063\") " pod="calico-system/whisker-56cbc7d65f-plq4q" May 16 05:00:58.162854 kubelet[2661]: I0516 05:00:58.162776 2661 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f454e4d-8aa4-46c9-8e87-444cb605f063-whisker-ca-bundle\") pod \"whisker-56cbc7d65f-plq4q\" (UID: \"5f454e4d-8aa4-46c9-8e87-444cb605f063\") " pod="calico-system/whisker-56cbc7d65f-plq4q" May 16 05:00:58.426764 containerd[1519]: time="2025-05-16T05:00:58.426653454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56cbc7d65f-plq4q,Uid:5f454e4d-8aa4-46c9-8e87-444cb605f063,Namespace:calico-system,Attempt:0,}" May 16 05:00:58.608803 systemd-networkd[1449]: cali0322ec97120: Link UP May 16 05:00:58.609044 systemd-networkd[1449]: cali0322ec97120: Gained carrier May 16 05:00:58.621837 containerd[1519]: 2025-05-16 05:00:58.447 [INFO][3754] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 16 05:00:58.621837 containerd[1519]: 2025-05-16 05:00:58.487 [INFO][3754] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--56cbc7d65f--plq4q-eth0 whisker-56cbc7d65f- calico-system 5f454e4d-8aa4-46c9-8e87-444cb605f063 850 0 2025-05-16 05:00:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:56cbc7d65f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-56cbc7d65f-plq4q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0322ec97120 [] [] }} ContainerID="75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" Namespace="calico-system" Pod="whisker-56cbc7d65f-plq4q" WorkloadEndpoint="localhost-k8s-whisker--56cbc7d65f--plq4q-" May 16 05:00:58.621837 containerd[1519]: 2025-05-16 05:00:58.487 [INFO][3754] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" Namespace="calico-system" Pod="whisker-56cbc7d65f-plq4q" WorkloadEndpoint="localhost-k8s-whisker--56cbc7d65f--plq4q-eth0" May 16 05:00:58.621837 containerd[1519]: 2025-05-16 05:00:58.564 [INFO][3768] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" HandleID="k8s-pod-network.75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" Workload="localhost-k8s-whisker--56cbc7d65f--plq4q-eth0" May 16 05:00:58.622045 containerd[1519]: 2025-05-16 05:00:58.565 [INFO][3768] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" HandleID="k8s-pod-network.75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" Workload="localhost-k8s-whisker--56cbc7d65f--plq4q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400039e310), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-56cbc7d65f-plq4q", "timestamp":"2025-05-16 05:00:58.564930307 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 05:00:58.622045 containerd[1519]: 2025-05-16 05:00:58.565 [INFO][3768] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 05:00:58.622045 containerd[1519]: 2025-05-16 05:00:58.565 [INFO][3768] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 05:00:58.622045 containerd[1519]: 2025-05-16 05:00:58.565 [INFO][3768] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 05:00:58.622045 containerd[1519]: 2025-05-16 05:00:58.578 [INFO][3768] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" host="localhost" May 16 05:00:58.622045 containerd[1519]: 2025-05-16 05:00:58.583 [INFO][3768] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 05:00:58.622045 containerd[1519]: 2025-05-16 05:00:58.586 [INFO][3768] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 05:00:58.622045 containerd[1519]: 2025-05-16 05:00:58.588 [INFO][3768] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 05:00:58.622045 containerd[1519]: 2025-05-16 05:00:58.590 [INFO][3768] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 05:00:58.622045 containerd[1519]: 2025-05-16 05:00:58.590 [INFO][3768] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" host="localhost" May 16 05:00:58.622254 containerd[1519]: 2025-05-16 05:00:58.591 [INFO][3768] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686 May 16 05:00:58.622254 containerd[1519]: 2025-05-16 05:00:58.594 [INFO][3768] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" host="localhost" May 16 05:00:58.622254 containerd[1519]: 2025-05-16 05:00:58.599 [INFO][3768] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" host="localhost" May 16 05:00:58.622254 containerd[1519]: 2025-05-16 05:00:58.599 [INFO][3768] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" host="localhost" May 16 05:00:58.622254 containerd[1519]: 2025-05-16 05:00:58.599 [INFO][3768] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 16 05:00:58.622254 containerd[1519]: 2025-05-16 05:00:58.599 [INFO][3768] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" HandleID="k8s-pod-network.75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" Workload="localhost-k8s-whisker--56cbc7d65f--plq4q-eth0" May 16 05:00:58.622375 containerd[1519]: 2025-05-16 05:00:58.602 [INFO][3754] cni-plugin/k8s.go 418: Populated endpoint ContainerID="75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" Namespace="calico-system" Pod="whisker-56cbc7d65f-plq4q" WorkloadEndpoint="localhost-k8s-whisker--56cbc7d65f--plq4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--56cbc7d65f--plq4q-eth0", GenerateName:"whisker-56cbc7d65f-", Namespace:"calico-system", SelfLink:"", UID:"5f454e4d-8aa4-46c9-8e87-444cb605f063", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56cbc7d65f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-56cbc7d65f-plq4q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0322ec97120", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:00:58.622375 containerd[1519]: 2025-05-16 05:00:58.602 [INFO][3754] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" Namespace="calico-system" Pod="whisker-56cbc7d65f-plq4q" WorkloadEndpoint="localhost-k8s-whisker--56cbc7d65f--plq4q-eth0" May 16 05:00:58.622440 containerd[1519]: 2025-05-16 05:00:58.602 [INFO][3754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0322ec97120 ContainerID="75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" Namespace="calico-system" Pod="whisker-56cbc7d65f-plq4q" WorkloadEndpoint="localhost-k8s-whisker--56cbc7d65f--plq4q-eth0" May 16 05:00:58.622440 containerd[1519]: 2025-05-16 05:00:58.609 [INFO][3754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" Namespace="calico-system" Pod="whisker-56cbc7d65f-plq4q" WorkloadEndpoint="localhost-k8s-whisker--56cbc7d65f--plq4q-eth0" May 16 05:00:58.622480 containerd[1519]: 2025-05-16 05:00:58.611 [INFO][3754] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" Namespace="calico-system" Pod="whisker-56cbc7d65f-plq4q" WorkloadEndpoint="localhost-k8s-whisker--56cbc7d65f--plq4q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--56cbc7d65f--plq4q-eth0", GenerateName:"whisker-56cbc7d65f-", Namespace:"calico-system", SelfLink:"", UID:"5f454e4d-8aa4-46c9-8e87-444cb605f063", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 58, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"56cbc7d65f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686", Pod:"whisker-56cbc7d65f-plq4q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0322ec97120", MAC:"72:0e:d4:c6:79:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:00:58.622525 containerd[1519]: 2025-05-16 05:00:58.619 [INFO][3754] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" Namespace="calico-system" Pod="whisker-56cbc7d65f-plq4q" WorkloadEndpoint="localhost-k8s-whisker--56cbc7d65f--plq4q-eth0" May 16 05:00:58.671032 containerd[1519]: time="2025-05-16T05:00:58.670975458Z" level=info msg="connecting to shim 75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686" address="unix:///run/containerd/s/092b5717cf5de1a4734ff3fe78dd478a421ef8b3f87c6f1a03b94525352dd241" namespace=k8s.io protocol=ttrpc version=3 May 16 05:00:58.696424 systemd[1]: Started cri-containerd-75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686.scope - libcontainer container 75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686. 
May 16 05:00:58.707813 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 05:00:58.730304 containerd[1519]: time="2025-05-16T05:00:58.730212070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-56cbc7d65f-plq4q,Uid:5f454e4d-8aa4-46c9-8e87-444cb605f063,Namespace:calico-system,Attempt:0,} returns sandbox id \"75eb8d6e4eded9d8729efee681f0c131cd8fa0e0a3a1a5301284b459d2d71686\"" May 16 05:00:58.737634 containerd[1519]: time="2025-05-16T05:00:58.737597947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 05:00:58.884600 containerd[1519]: time="2025-05-16T05:00:58.884444417Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 05:00:58.914747 kubelet[2661]: I0516 05:00:58.914603 2661 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc024b19-433a-4c77-9817-d8df23a019d7" path="/var/lib/kubelet/pods/fc024b19-433a-4c77-9817-d8df23a019d7/volumes" May 16 05:00:58.940507 containerd[1519]: time="2025-05-16T05:00:58.908421337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 05:00:58.940662 containerd[1519]: time="2025-05-16T05:00:58.908477736Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 05:00:58.941007 kubelet[2661]: E0516 05:00:58.940824 2661 
log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 05:00:58.941149 kubelet[2661]: E0516 05:00:58.941123 2661 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 05:00:58.952134 kubelet[2661]: E0516 05:00:58.952055 2661 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ef125e5441f1476585809f36c5a808ba,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdf96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-56cbc7d65f-plq4q_calico-system(5f454e4d-8aa4-46c9-8e87-444cb605f063): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 05:00:58.955128 containerd[1519]: 
time="2025-05-16T05:00:58.955094518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 05:00:59.045832 kubelet[2661]: I0516 05:00:59.045797 2661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 05:00:59.166326 containerd[1519]: time="2025-05-16T05:00:59.166251872Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 05:00:59.167218 containerd[1519]: time="2025-05-16T05:00:59.167127858Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 05:00:59.167218 containerd[1519]: time="2025-05-16T05:00:59.167197456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 05:00:59.167410 kubelet[2661]: E0516 05:00:59.167356 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 05:00:59.167485 kubelet[2661]: E0516 
05:00:59.167417 2661 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 05:00:59.167801 kubelet[2661]: E0516 05:00:59.167534 2661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdf96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*100
01,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-56cbc7d65f-plq4q_calico-system(5f454e4d-8aa4-46c9-8e87-444cb605f063): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 05:00:59.169964 kubelet[2661]: E0516 05:00:59.169859 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-56cbc7d65f-plq4q" 
podUID="5f454e4d-8aa4-46c9-8e87-444cb605f063" May 16 05:01:00.054106 kubelet[2661]: E0516 05:01:00.053496 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-56cbc7d65f-plq4q" podUID="5f454e4d-8aa4-46c9-8e87-444cb605f063" May 16 05:01:00.155423 systemd-networkd[1449]: cali0322ec97120: Gained IPv6LL May 16 05:01:00.400108 kubelet[2661]: I0516 05:01:00.399988 2661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 05:01:01.388607 systemd-networkd[1449]: vxlan.calico: Link UP May 16 05:01:01.388621 systemd-networkd[1449]: vxlan.calico: Gained carrier May 16 05:01:03.163405 systemd-networkd[1449]: vxlan.calico: Gained IPv6LL May 16 05:01:03.910109 containerd[1519]: time="2025-05-16T05:01:03.910040748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-nwhgj,Uid:3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73,Namespace:calico-system,Attempt:0,}" May 16 05:01:04.051865 systemd-networkd[1449]: cali6d62daefd27: Link UP May 16 05:01:04.052441 systemd-networkd[1449]: cali6d62daefd27: Gained carrier May 16 05:01:04.066082 containerd[1519]: 2025-05-16 05:01:03.983 [INFO][4100] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--8f77d7b6c--nwhgj-eth0 goldmane-8f77d7b6c- calico-system 3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73 788 0 2025-05-16 05:00:44 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:8f77d7b6c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-8f77d7b6c-nwhgj eth0 goldmane 
[] [] [kns.calico-system ksa.calico-system.goldmane] cali6d62daefd27 [] [] }} ContainerID="d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" Namespace="calico-system" Pod="goldmane-8f77d7b6c-nwhgj" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--nwhgj-" May 16 05:01:04.066082 containerd[1519]: 2025-05-16 05:01:03.983 [INFO][4100] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" Namespace="calico-system" Pod="goldmane-8f77d7b6c-nwhgj" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--nwhgj-eth0" May 16 05:01:04.066082 containerd[1519]: 2025-05-16 05:01:04.012 [INFO][4114] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" HandleID="k8s-pod-network.d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" Workload="localhost-k8s-goldmane--8f77d7b6c--nwhgj-eth0" May 16 05:01:04.066317 containerd[1519]: 2025-05-16 05:01:04.013 [INFO][4114] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" HandleID="k8s-pod-network.d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" Workload="localhost-k8s-goldmane--8f77d7b6c--nwhgj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c750), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-8f77d7b6c-nwhgj", "timestamp":"2025-05-16 05:01:04.012877621 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 05:01:04.066317 containerd[1519]: 2025-05-16 05:01:04.013 [INFO][4114] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
May 16 05:01:04.066317 containerd[1519]: 2025-05-16 05:01:04.013 [INFO][4114] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 16 05:01:04.066317 containerd[1519]: 2025-05-16 05:01:04.013 [INFO][4114] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 05:01:04.066317 containerd[1519]: 2025-05-16 05:01:04.022 [INFO][4114] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" host="localhost" May 16 05:01:04.066317 containerd[1519]: 2025-05-16 05:01:04.027 [INFO][4114] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 05:01:04.066317 containerd[1519]: 2025-05-16 05:01:04.031 [INFO][4114] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 05:01:04.066317 containerd[1519]: 2025-05-16 05:01:04.033 [INFO][4114] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 05:01:04.066317 containerd[1519]: 2025-05-16 05:01:04.035 [INFO][4114] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 05:01:04.066317 containerd[1519]: 2025-05-16 05:01:04.035 [INFO][4114] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" host="localhost" May 16 05:01:04.066913 containerd[1519]: 2025-05-16 05:01:04.037 [INFO][4114] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5 May 16 05:01:04.066913 containerd[1519]: 2025-05-16 05:01:04.040 [INFO][4114] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" host="localhost" May 16 05:01:04.066913 containerd[1519]: 2025-05-16 05:01:04.045 [INFO][4114] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" host="localhost" May 16 05:01:04.066913 containerd[1519]: 2025-05-16 05:01:04.046 [INFO][4114] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" host="localhost" May 16 05:01:04.066913 containerd[1519]: 2025-05-16 05:01:04.046 [INFO][4114] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 05:01:04.066913 containerd[1519]: 2025-05-16 05:01:04.046 [INFO][4114] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" HandleID="k8s-pod-network.d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" Workload="localhost-k8s-goldmane--8f77d7b6c--nwhgj-eth0" May 16 05:01:04.067247 containerd[1519]: 2025-05-16 05:01:04.049 [INFO][4100] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" Namespace="calico-system" Pod="goldmane-8f77d7b6c-nwhgj" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--nwhgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--nwhgj-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-8f77d7b6c-nwhgj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d62daefd27", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:01:04.067247 containerd[1519]: 2025-05-16 05:01:04.050 [INFO][4100] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" Namespace="calico-system" Pod="goldmane-8f77d7b6c-nwhgj" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--nwhgj-eth0" May 16 05:01:04.067357 containerd[1519]: 2025-05-16 05:01:04.050 [INFO][4100] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6d62daefd27 ContainerID="d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" Namespace="calico-system" Pod="goldmane-8f77d7b6c-nwhgj" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--nwhgj-eth0" May 16 05:01:04.067357 containerd[1519]: 2025-05-16 05:01:04.052 [INFO][4100] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" Namespace="calico-system" Pod="goldmane-8f77d7b6c-nwhgj" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--nwhgj-eth0" May 16 05:01:04.067410 containerd[1519]: 2025-05-16 05:01:04.052 [INFO][4100] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" 
Namespace="calico-system" Pod="goldmane-8f77d7b6c-nwhgj" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--nwhgj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--8f77d7b6c--nwhgj-eth0", GenerateName:"goldmane-8f77d7b6c-", Namespace:"calico-system", SelfLink:"", UID:"3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73", ResourceVersion:"788", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"8f77d7b6c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5", Pod:"goldmane-8f77d7b6c-nwhgj", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6d62daefd27", MAC:"ce:56:58:0a:fe:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:01:04.067462 containerd[1519]: 2025-05-16 05:01:04.062 [INFO][4100] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" Namespace="calico-system" Pod="goldmane-8f77d7b6c-nwhgj" WorkloadEndpoint="localhost-k8s-goldmane--8f77d7b6c--nwhgj-eth0" May 16 05:01:04.102069 containerd[1519]: 
time="2025-05-16T05:01:04.102010162Z" level=info msg="connecting to shim d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5" address="unix:///run/containerd/s/79c23bb2f4cfcb11014f5448575def75cec78a8531ea2141f1a3506d0d428202" namespace=k8s.io protocol=ttrpc version=3 May 16 05:01:04.137432 systemd[1]: Started cri-containerd-d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5.scope - libcontainer container d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5. May 16 05:01:04.148045 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 05:01:04.169299 containerd[1519]: time="2025-05-16T05:01:04.169137174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-8f77d7b6c-nwhgj,Uid:3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73,Namespace:calico-system,Attempt:0,} returns sandbox id \"d035ac423c442272aed48e2045ea71ad2cf1cb3e05dd9e739345e86a3dad05b5\"" May 16 05:01:04.171666 containerd[1519]: time="2025-05-16T05:01:04.171593340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 05:01:04.318088 containerd[1519]: time="2025-05-16T05:01:04.318044592Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 05:01:04.318811 containerd[1519]: time="2025-05-16T05:01:04.318758502Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 
Forbidden" May 16 05:01:04.318884 containerd[1519]: time="2025-05-16T05:01:04.318804181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 05:01:04.319029 kubelet[2661]: E0516 05:01:04.318982 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 05:01:04.319402 kubelet[2661]: E0516 05:01:04.319029 2661 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 05:01:04.319402 kubelet[2661]: E0516 05:01:04.319156 2661 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcx28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-nwhgj_calico-system(3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 05:01:04.320409 kubelet[2661]: E0516 05:01:04.320366 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-nwhgj" podUID="3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73" May 16 05:01:05.075352 kubelet[2661]: E0516 
05:01:05.075310 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-nwhgj" podUID="3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73" May 16 05:01:05.502624 systemd[1]: Started sshd@7-10.0.0.27:22-10.0.0.1:43076.service - OpenSSH per-connection server daemon (10.0.0.1:43076). May 16 05:01:05.571309 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 43076 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 05:01:05.572582 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:01:05.577266 systemd-logind[1505]: New session 8 of user core. May 16 05:01:05.586383 systemd[1]: Started session-8.scope - Session 8 of User core. May 16 05:01:05.729308 sshd[4187]: Connection closed by 10.0.0.1 port 43076 May 16 05:01:05.729877 sshd-session[4184]: pam_unix(sshd:session): session closed for user core May 16 05:01:05.733435 systemd-logind[1505]: Session 8 logged out. Waiting for processes to exit. May 16 05:01:05.733708 systemd[1]: sshd@7-10.0.0.27:22-10.0.0.1:43076.service: Deactivated successfully. May 16 05:01:05.736046 systemd[1]: session-8.scope: Deactivated successfully. May 16 05:01:05.737616 systemd-logind[1505]: Removed session 8. 
May 16 05:01:05.920256 containerd[1519]: time="2025-05-16T05:01:05.920132123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7f886b9f-r957p,Uid:18d45456-22c3-4417-8b57-504e65cd40a3,Namespace:calico-apiserver,Attempt:0,}" May 16 05:01:05.979403 systemd-networkd[1449]: cali6d62daefd27: Gained IPv6LL May 16 05:01:06.018055 systemd-networkd[1449]: cali904b9223ae7: Link UP May 16 05:01:06.018664 systemd-networkd[1449]: cali904b9223ae7: Gained carrier May 16 05:01:06.031648 containerd[1519]: 2025-05-16 05:01:05.958 [INFO][4200] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d7f886b9f--r957p-eth0 calico-apiserver-d7f886b9f- calico-apiserver 18d45456-22c3-4417-8b57-504e65cd40a3 791 0 2025-05-16 05:00:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d7f886b9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d7f886b9f-r957p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali904b9223ae7 [] [] }} ContainerID="b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-r957p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--r957p-" May 16 05:01:06.031648 containerd[1519]: 2025-05-16 05:01:05.958 [INFO][4200] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-r957p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--r957p-eth0" May 16 05:01:06.031648 containerd[1519]: 2025-05-16 05:01:05.982 [INFO][4215] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" HandleID="k8s-pod-network.b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" Workload="localhost-k8s-calico--apiserver--d7f886b9f--r957p-eth0" May 16 05:01:06.031838 containerd[1519]: 2025-05-16 05:01:05.982 [INFO][4215] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" HandleID="k8s-pod-network.b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" Workload="localhost-k8s-calico--apiserver--d7f886b9f--r957p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a3470), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d7f886b9f-r957p", "timestamp":"2025-05-16 05:01:05.982070432 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 05:01:06.031838 containerd[1519]: 2025-05-16 05:01:05.982 [INFO][4215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 05:01:06.031838 containerd[1519]: 2025-05-16 05:01:05.982 [INFO][4215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 05:01:06.031838 containerd[1519]: 2025-05-16 05:01:05.982 [INFO][4215] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 05:01:06.031838 containerd[1519]: 2025-05-16 05:01:05.991 [INFO][4215] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" host="localhost" May 16 05:01:06.031838 containerd[1519]: 2025-05-16 05:01:05.995 [INFO][4215] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 05:01:06.031838 containerd[1519]: 2025-05-16 05:01:05.999 [INFO][4215] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 05:01:06.031838 containerd[1519]: 2025-05-16 05:01:06.001 [INFO][4215] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 05:01:06.031838 containerd[1519]: 2025-05-16 05:01:06.003 [INFO][4215] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 05:01:06.031838 containerd[1519]: 2025-05-16 05:01:06.003 [INFO][4215] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" host="localhost" May 16 05:01:06.032045 containerd[1519]: 2025-05-16 05:01:06.005 [INFO][4215] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7 May 16 05:01:06.032045 containerd[1519]: 2025-05-16 05:01:06.008 [INFO][4215] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" host="localhost" May 16 05:01:06.032045 containerd[1519]: 2025-05-16 05:01:06.013 [INFO][4215] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" host="localhost" May 16 05:01:06.032045 containerd[1519]: 2025-05-16 05:01:06.014 [INFO][4215] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" host="localhost" May 16 05:01:06.032045 containerd[1519]: 2025-05-16 05:01:06.014 [INFO][4215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 05:01:06.032045 containerd[1519]: 2025-05-16 05:01:06.014 [INFO][4215] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" HandleID="k8s-pod-network.b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" Workload="localhost-k8s-calico--apiserver--d7f886b9f--r957p-eth0" May 16 05:01:06.032151 containerd[1519]: 2025-05-16 05:01:06.016 [INFO][4200] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-r957p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--r957p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d7f886b9f--r957p-eth0", GenerateName:"calico-apiserver-d7f886b9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"18d45456-22c3-4417-8b57-504e65cd40a3", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d7f886b9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d7f886b9f-r957p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali904b9223ae7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:01:06.032198 containerd[1519]: 2025-05-16 05:01:06.016 [INFO][4200] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-r957p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--r957p-eth0" May 16 05:01:06.032198 containerd[1519]: 2025-05-16 05:01:06.016 [INFO][4200] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali904b9223ae7 ContainerID="b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-r957p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--r957p-eth0" May 16 05:01:06.032198 containerd[1519]: 2025-05-16 05:01:06.018 [INFO][4200] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-r957p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--r957p-eth0" May 16 05:01:06.032370 containerd[1519]: 2025-05-16 05:01:06.019 [INFO][4200] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-r957p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--r957p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d7f886b9f--r957p-eth0", GenerateName:"calico-apiserver-d7f886b9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"18d45456-22c3-4417-8b57-504e65cd40a3", ResourceVersion:"791", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d7f886b9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7", Pod:"calico-apiserver-d7f886b9f-r957p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali904b9223ae7", MAC:"5a:9d:c6:df:21:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:01:06.032421 containerd[1519]: 2025-05-16 05:01:06.028 [INFO][4200] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-r957p" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--r957p-eth0" May 16 05:01:06.050909 containerd[1519]: time="2025-05-16T05:01:06.050839986Z" level=info msg="connecting to shim b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7" address="unix:///run/containerd/s/b7964fc362402526023cedaba164e39d5a2a015e5bd56436dba25cb1970f8491" namespace=k8s.io protocol=ttrpc version=3 May 16 05:01:06.076853 kubelet[2661]: E0516 05:01:06.076789 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-nwhgj" podUID="3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73" May 16 05:01:06.082402 systemd[1]: Started cri-containerd-b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7.scope - libcontainer container b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7. 
May 16 05:01:06.097925 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 05:01:06.117497 containerd[1519]: time="2025-05-16T05:01:06.117433256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7f886b9f-r957p,Uid:18d45456-22c3-4417-8b57-504e65cd40a3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7\"" May 16 05:01:06.129117 containerd[1519]: time="2025-05-16T05:01:06.129070100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 16 05:01:06.910564 containerd[1519]: time="2025-05-16T05:01:06.910500375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lvp4t,Uid:704fe550-765d-43cc-8dcb-1eabf4abbe0c,Namespace:kube-system,Attempt:0,}" May 16 05:01:06.911083 containerd[1519]: time="2025-05-16T05:01:06.911033368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f4f5c4487-mct8b,Uid:6f7a9ab8-1325-4d5f-862e-18057d14ccb1,Namespace:calico-system,Attempt:0,}" May 16 05:01:07.019971 systemd-networkd[1449]: calic95300cffae: Link UP May 16 05:01:07.020749 systemd-networkd[1449]: calic95300cffae: Gained carrier May 16 05:01:07.035737 containerd[1519]: 2025-05-16 05:01:06.951 [INFO][4284] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--lvp4t-eth0 coredns-7c65d6cfc9- kube-system 704fe550-765d-43cc-8dcb-1eabf4abbe0c 785 0 2025-05-16 05:00:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-lvp4t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic95300cffae [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvp4t" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lvp4t-" May 16 05:01:07.035737 containerd[1519]: 2025-05-16 05:01:06.951 [INFO][4284] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvp4t" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lvp4t-eth0" May 16 05:01:07.035737 containerd[1519]: 2025-05-16 05:01:06.980 [INFO][4313] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" HandleID="k8s-pod-network.9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" Workload="localhost-k8s-coredns--7c65d6cfc9--lvp4t-eth0" May 16 05:01:07.036148 containerd[1519]: 2025-05-16 05:01:06.980 [INFO][4313] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" HandleID="k8s-pod-network.9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" Workload="localhost-k8s-coredns--7c65d6cfc9--lvp4t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e1620), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-lvp4t", "timestamp":"2025-05-16 05:01:06.980072045 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 05:01:07.036148 containerd[1519]: 2025-05-16 05:01:06.980 [INFO][4313] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 05:01:07.036148 containerd[1519]: 2025-05-16 05:01:06.980 [INFO][4313] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 05:01:07.036148 containerd[1519]: 2025-05-16 05:01:06.980 [INFO][4313] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 05:01:07.036148 containerd[1519]: 2025-05-16 05:01:06.989 [INFO][4313] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" host="localhost" May 16 05:01:07.036148 containerd[1519]: 2025-05-16 05:01:06.995 [INFO][4313] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 05:01:07.036148 containerd[1519]: 2025-05-16 05:01:07.000 [INFO][4313] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 05:01:07.036148 containerd[1519]: 2025-05-16 05:01:07.001 [INFO][4313] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 05:01:07.036148 containerd[1519]: 2025-05-16 05:01:07.004 [INFO][4313] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 05:01:07.036148 containerd[1519]: 2025-05-16 05:01:07.004 [INFO][4313] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" host="localhost" May 16 05:01:07.036482 containerd[1519]: 2025-05-16 05:01:07.005 [INFO][4313] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f May 16 05:01:07.036482 containerd[1519]: 2025-05-16 05:01:07.009 [INFO][4313] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" host="localhost" May 16 05:01:07.036482 containerd[1519]: 2025-05-16 05:01:07.014 [INFO][4313] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" host="localhost" May 16 05:01:07.036482 containerd[1519]: 2025-05-16 05:01:07.014 [INFO][4313] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" host="localhost" May 16 05:01:07.036482 containerd[1519]: 2025-05-16 05:01:07.014 [INFO][4313] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 05:01:07.036482 containerd[1519]: 2025-05-16 05:01:07.015 [INFO][4313] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" HandleID="k8s-pod-network.9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" Workload="localhost-k8s-coredns--7c65d6cfc9--lvp4t-eth0" May 16 05:01:07.036616 containerd[1519]: 2025-05-16 05:01:07.017 [INFO][4284] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvp4t" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lvp4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--lvp4t-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"704fe550-765d-43cc-8dcb-1eabf4abbe0c", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-lvp4t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic95300cffae", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:01:07.036679 containerd[1519]: 2025-05-16 05:01:07.017 [INFO][4284] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvp4t" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lvp4t-eth0" May 16 05:01:07.036679 containerd[1519]: 2025-05-16 05:01:07.017 [INFO][4284] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic95300cffae ContainerID="9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvp4t" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lvp4t-eth0" May 16 05:01:07.036679 containerd[1519]: 2025-05-16 05:01:07.021 [INFO][4284] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvp4t" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lvp4t-eth0" May 16 05:01:07.036758 containerd[1519]: 2025-05-16 05:01:07.022 [INFO][4284] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvp4t" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lvp4t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--lvp4t-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"704fe550-765d-43cc-8dcb-1eabf4abbe0c", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f", Pod:"coredns-7c65d6cfc9-lvp4t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic95300cffae", MAC:"42:0d:18:79:e1:4e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:01:07.036758 containerd[1519]: 2025-05-16 05:01:07.032 [INFO][4284] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" Namespace="kube-system" Pod="coredns-7c65d6cfc9-lvp4t" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--lvp4t-eth0" May 16 05:01:07.061688 containerd[1519]: time="2025-05-16T05:01:07.061629736Z" level=info msg="connecting to shim 9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f" address="unix:///run/containerd/s/39850668422544deab852b6074db519007f6d7ddadace040507ae1b528e0c6f7" namespace=k8s.io protocol=ttrpc version=3 May 16 05:01:07.098460 systemd[1]: Started cri-containerd-9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f.scope - libcontainer container 9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f. 
May 16 05:01:07.115850 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 05:01:07.139119 systemd-networkd[1449]: cali474672a979b: Link UP May 16 05:01:07.139352 systemd-networkd[1449]: cali474672a979b: Gained carrier May 16 05:01:07.152416 containerd[1519]: time="2025-05-16T05:01:07.151038373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lvp4t,Uid:704fe550-765d-43cc-8dcb-1eabf4abbe0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f\"" May 16 05:01:07.157653 containerd[1519]: time="2025-05-16T05:01:07.157615088Z" level=info msg="CreateContainer within sandbox \"9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:06.957 [INFO][4290] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--f4f5c4487--mct8b-eth0 calico-kube-controllers-f4f5c4487- calico-system 6f7a9ab8-1325-4d5f-862e-18057d14ccb1 789 0 2025-05-16 05:00:44 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:f4f5c4487 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-f4f5c4487-mct8b eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali474672a979b [] [] }} ContainerID="395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" Namespace="calico-system" Pod="calico-kube-controllers-f4f5c4487-mct8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4f5c4487--mct8b-" May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:06.957 [INFO][4290] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" Namespace="calico-system" Pod="calico-kube-controllers-f4f5c4487-mct8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4f5c4487--mct8b-eth0" May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:06.988 [INFO][4320] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" HandleID="k8s-pod-network.395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" Workload="localhost-k8s-calico--kube--controllers--f4f5c4487--mct8b-eth0" May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:06.988 [INFO][4320] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" HandleID="k8s-pod-network.395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" Workload="localhost-k8s-calico--kube--controllers--f4f5c4487--mct8b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a9010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-f4f5c4487-mct8b", "timestamp":"2025-05-16 05:01:06.988588571 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:06.988 [INFO][4320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:07.015 [INFO][4320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:07.015 [INFO][4320] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:07.091 [INFO][4320] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" host="localhost" May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:07.096 [INFO][4320] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:07.103 [INFO][4320] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:07.106 [INFO][4320] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:07.109 [INFO][4320] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:07.109 [INFO][4320] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" host="localhost" May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:07.114 [INFO][4320] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:07.122 [INFO][4320] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" host="localhost" May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:07.129 [INFO][4320] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" host="localhost" May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:07.129 [INFO][4320] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" host="localhost" May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:07.129 [INFO][4320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 05:01:07.161417 containerd[1519]: 2025-05-16 05:01:07.129 [INFO][4320] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" HandleID="k8s-pod-network.395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" Workload="localhost-k8s-calico--kube--controllers--f4f5c4487--mct8b-eth0" May 16 05:01:07.161898 containerd[1519]: 2025-05-16 05:01:07.132 [INFO][4290] cni-plugin/k8s.go 418: Populated endpoint ContainerID="395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" Namespace="calico-system" Pod="calico-kube-controllers-f4f5c4487-mct8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4f5c4487--mct8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f4f5c4487--mct8b-eth0", GenerateName:"calico-kube-controllers-f4f5c4487-", Namespace:"calico-system", SelfLink:"", UID:"6f7a9ab8-1325-4d5f-862e-18057d14ccb1", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f4f5c4487", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-f4f5c4487-mct8b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali474672a979b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:01:07.161898 containerd[1519]: 2025-05-16 05:01:07.133 [INFO][4290] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" Namespace="calico-system" Pod="calico-kube-controllers-f4f5c4487-mct8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4f5c4487--mct8b-eth0" May 16 05:01:07.161898 containerd[1519]: 2025-05-16 05:01:07.133 [INFO][4290] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali474672a979b ContainerID="395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" Namespace="calico-system" Pod="calico-kube-controllers-f4f5c4487-mct8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4f5c4487--mct8b-eth0" May 16 05:01:07.161898 containerd[1519]: 2025-05-16 05:01:07.139 [INFO][4290] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" Namespace="calico-system" Pod="calico-kube-controllers-f4f5c4487-mct8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4f5c4487--mct8b-eth0" May 16 05:01:07.161898 containerd[1519]: 2025-05-16 
05:01:07.140 [INFO][4290] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" Namespace="calico-system" Pod="calico-kube-controllers-f4f5c4487-mct8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4f5c4487--mct8b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--f4f5c4487--mct8b-eth0", GenerateName:"calico-kube-controllers-f4f5c4487-", Namespace:"calico-system", SelfLink:"", UID:"6f7a9ab8-1325-4d5f-862e-18057d14ccb1", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"f4f5c4487", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e", Pod:"calico-kube-controllers-f4f5c4487-mct8b", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali474672a979b", MAC:"b2:3d:d1:f9:42:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:01:07.161898 containerd[1519]: 2025-05-16 05:01:07.154 
[INFO][4290] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" Namespace="calico-system" Pod="calico-kube-controllers-f4f5c4487-mct8b" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--f4f5c4487--mct8b-eth0" May 16 05:01:07.171495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount951032039.mount: Deactivated successfully. May 16 05:01:07.172079 containerd[1519]: time="2025-05-16T05:01:07.171995980Z" level=info msg="Container 26f37dfa4695b6a4244bcaec9875198523644f1859ead2e41167bccb795a2979: CDI devices from CRI Config.CDIDevices: []" May 16 05:01:07.174622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4024022073.mount: Deactivated successfully. May 16 05:01:07.191250 containerd[1519]: time="2025-05-16T05:01:07.191090572Z" level=info msg="connecting to shim 395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e" address="unix:///run/containerd/s/6a887013b7226740fb2d26273c91c853b76574beb73340e2e9eb78caf0d9e452" namespace=k8s.io protocol=ttrpc version=3 May 16 05:01:07.192925 containerd[1519]: time="2025-05-16T05:01:07.192885629Z" level=info msg="CreateContainer within sandbox \"9ba2a7391d7c4a1d601974956435429614e3baee3fc09c570f2ffd61fc0f559f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"26f37dfa4695b6a4244bcaec9875198523644f1859ead2e41167bccb795a2979\"" May 16 05:01:07.193537 containerd[1519]: time="2025-05-16T05:01:07.193513181Z" level=info msg="StartContainer for \"26f37dfa4695b6a4244bcaec9875198523644f1859ead2e41167bccb795a2979\"" May 16 05:01:07.194446 containerd[1519]: time="2025-05-16T05:01:07.194400689Z" level=info msg="connecting to shim 26f37dfa4695b6a4244bcaec9875198523644f1859ead2e41167bccb795a2979" address="unix:///run/containerd/s/39850668422544deab852b6074db519007f6d7ddadace040507ae1b528e0c6f7" protocol=ttrpc version=3 May 16 05:01:07.211413 systemd[1]: Started 
cri-containerd-26f37dfa4695b6a4244bcaec9875198523644f1859ead2e41167bccb795a2979.scope - libcontainer container 26f37dfa4695b6a4244bcaec9875198523644f1859ead2e41167bccb795a2979. May 16 05:01:07.236607 systemd[1]: Started cri-containerd-395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e.scope - libcontainer container 395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e. May 16 05:01:07.258852 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 05:01:07.268743 containerd[1519]: time="2025-05-16T05:01:07.268678083Z" level=info msg="StartContainer for \"26f37dfa4695b6a4244bcaec9875198523644f1859ead2e41167bccb795a2979\" returns successfully" May 16 05:01:07.316007 containerd[1519]: time="2025-05-16T05:01:07.315895548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-f4f5c4487-mct8b,Uid:6f7a9ab8-1325-4d5f-862e-18057d14ccb1,Namespace:calico-system,Attempt:0,} returns sandbox id \"395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e\"" May 16 05:01:07.899531 systemd-networkd[1449]: cali904b9223ae7: Gained IPv6LL May 16 05:01:07.910686 containerd[1519]: time="2025-05-16T05:01:07.910571652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5jxv4,Uid:e6af650f-1da2-49ae-aae7-f7207bc906df,Namespace:calico-system,Attempt:0,}" May 16 05:01:07.910916 containerd[1519]: time="2025-05-16T05:01:07.910571652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-22d75,Uid:eafc01ac-e2de-4f92-86b0-314fd75a243e,Namespace:kube-system,Attempt:0,}" May 16 05:01:08.010556 containerd[1519]: time="2025-05-16T05:01:08.010372317Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:01:08.011346 containerd[1519]: time="2025-05-16T05:01:08.011242906Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=44453213" May 16 05:01:08.013269 containerd[1519]: time="2025-05-16T05:01:08.012316252Z" level=info msg="ImageCreate event name:\"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:01:08.014883 containerd[1519]: time="2025-05-16T05:01:08.014840260Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:01:08.015623 containerd[1519]: time="2025-05-16T05:01:08.015595251Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 1.886479791s" May 16 05:01:08.015675 containerd[1519]: time="2025-05-16T05:01:08.015628490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 16 05:01:08.017533 containerd[1519]: time="2025-05-16T05:01:08.017331749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 16 05:01:08.019153 containerd[1519]: time="2025-05-16T05:01:08.018976008Z" level=info msg="CreateContainer within sandbox \"b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 05:01:08.027261 containerd[1519]: time="2025-05-16T05:01:08.026885908Z" level=info msg="Container 1a58bc2d7a0fdbf9b6d3f872e29b342498f576c938538a66fa170b658ca8f49f: CDI devices from CRI Config.CDIDevices: []" May 16 05:01:08.035849 
containerd[1519]: time="2025-05-16T05:01:08.035804995Z" level=info msg="CreateContainer within sandbox \"b81e8566e7dbc8142d5ddc3a46ba9cf8378c451fb4dc8bf84a6eaa5a7d214ea7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"1a58bc2d7a0fdbf9b6d3f872e29b342498f576c938538a66fa170b658ca8f49f\"" May 16 05:01:08.037066 containerd[1519]: time="2025-05-16T05:01:08.036390548Z" level=info msg="StartContainer for \"1a58bc2d7a0fdbf9b6d3f872e29b342498f576c938538a66fa170b658ca8f49f\"" May 16 05:01:08.039011 containerd[1519]: time="2025-05-16T05:01:08.038819397Z" level=info msg="connecting to shim 1a58bc2d7a0fdbf9b6d3f872e29b342498f576c938538a66fa170b658ca8f49f" address="unix:///run/containerd/s/b7964fc362402526023cedaba164e39d5a2a015e5bd56436dba25cb1970f8491" protocol=ttrpc version=3 May 16 05:01:08.051088 systemd-networkd[1449]: cali186829bc5fa: Link UP May 16 05:01:08.051296 systemd-networkd[1449]: cali186829bc5fa: Gained carrier May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:07.964 [INFO][4485] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5jxv4-eth0 csi-node-driver- calico-system e6af650f-1da2-49ae-aae7-f7207bc906df 659 0 2025-05-16 05:00:44 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5jxv4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali186829bc5fa [] [] }} ContainerID="43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" Namespace="calico-system" Pod="csi-node-driver-5jxv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--5jxv4-" May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:07.965 [INFO][4485] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" Namespace="calico-system" Pod="csi-node-driver-5jxv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--5jxv4-eth0" May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:07.992 [INFO][4500] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" HandleID="k8s-pod-network.43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" Workload="localhost-k8s-csi--node--driver--5jxv4-eth0" May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:07.993 [INFO][4500] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" HandleID="k8s-pod-network.43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" Workload="localhost-k8s-csi--node--driver--5jxv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e1620), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5jxv4", "timestamp":"2025-05-16 05:01:07.99296454 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:07.993 [INFO][4500] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:07.993 [INFO][4500] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:07.993 [INFO][4500] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:08.006 [INFO][4500] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" host="localhost" May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:08.011 [INFO][4500] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:08.016 [INFO][4500] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:08.020 [INFO][4500] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:08.023 [INFO][4500] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:08.023 [INFO][4500] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" host="localhost" May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:08.024 [INFO][4500] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9 May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:08.035 [INFO][4500] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" host="localhost" May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:08.043 [INFO][4500] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" host="localhost" May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:08.043 [INFO][4500] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" host="localhost" May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:08.043 [INFO][4500] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 05:01:08.072574 containerd[1519]: 2025-05-16 05:01:08.043 [INFO][4500] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" HandleID="k8s-pod-network.43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" Workload="localhost-k8s-csi--node--driver--5jxv4-eth0" May 16 05:01:08.073130 containerd[1519]: 2025-05-16 05:01:08.047 [INFO][4485] cni-plugin/k8s.go 418: Populated endpoint ContainerID="43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" Namespace="calico-system" Pod="csi-node-driver-5jxv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--5jxv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5jxv4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e6af650f-1da2-49ae-aae7-f7207bc906df", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5jxv4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali186829bc5fa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:01:08.073130 containerd[1519]: 2025-05-16 05:01:08.047 [INFO][4485] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" Namespace="calico-system" Pod="csi-node-driver-5jxv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--5jxv4-eth0" May 16 05:01:08.073130 containerd[1519]: 2025-05-16 05:01:08.047 [INFO][4485] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali186829bc5fa ContainerID="43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" Namespace="calico-system" Pod="csi-node-driver-5jxv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--5jxv4-eth0" May 16 05:01:08.073130 containerd[1519]: 2025-05-16 05:01:08.051 [INFO][4485] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" Namespace="calico-system" Pod="csi-node-driver-5jxv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--5jxv4-eth0" May 16 05:01:08.073130 containerd[1519]: 2025-05-16 05:01:08.054 [INFO][4485] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" 
Namespace="calico-system" Pod="csi-node-driver-5jxv4" WorkloadEndpoint="localhost-k8s-csi--node--driver--5jxv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5jxv4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"e6af650f-1da2-49ae-aae7-f7207bc906df", ResourceVersion:"659", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9", Pod:"csi-node-driver-5jxv4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali186829bc5fa", MAC:"92:f6:3e:ab:8b:ca", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:01:08.073130 containerd[1519]: 2025-05-16 05:01:08.070 [INFO][4485] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" Namespace="calico-system" Pod="csi-node-driver-5jxv4" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--5jxv4-eth0" May 16 05:01:08.079416 systemd[1]: Started cri-containerd-1a58bc2d7a0fdbf9b6d3f872e29b342498f576c938538a66fa170b658ca8f49f.scope - libcontainer container 1a58bc2d7a0fdbf9b6d3f872e29b342498f576c938538a66fa170b658ca8f49f. May 16 05:01:08.110269 kubelet[2661]: I0516 05:01:08.109103 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lvp4t" podStartSLOduration=39.109083907 podStartE2EDuration="39.109083907s" podCreationTimestamp="2025-05-16 05:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:01:08.107873802 +0000 UTC m=+43.272158750" watchObservedRunningTime="2025-05-16 05:01:08.109083907 +0000 UTC m=+43.273368855" May 16 05:01:08.120242 containerd[1519]: time="2025-05-16T05:01:08.114066564Z" level=info msg="connecting to shim 43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9" address="unix:///run/containerd/s/1710d5fe778cc8ea6d6efaf894939dab80d03803ff222c1ef65e3d03c742a8ae" namespace=k8s.io protocol=ttrpc version=3 May 16 05:01:08.159293 systemd[1]: Started cri-containerd-43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9.scope - libcontainer container 43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9. 
May 16 05:01:08.174478 containerd[1519]: time="2025-05-16T05:01:08.174437600Z" level=info msg="StartContainer for \"1a58bc2d7a0fdbf9b6d3f872e29b342498f576c938538a66fa170b658ca8f49f\" returns successfully" May 16 05:01:08.180002 systemd-networkd[1449]: calif17fcf8629c: Link UP May 16 05:01:08.180625 systemd-networkd[1449]: calif17fcf8629c: Gained carrier May 16 05:01:08.190120 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.040 [INFO][4507] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7c65d6cfc9--22d75-eth0 coredns-7c65d6cfc9- kube-system eafc01ac-e2de-4f92-86b0-314fd75a243e 790 0 2025-05-16 05:00:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7c65d6cfc9-22d75 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif17fcf8629c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" Namespace="kube-system" Pod="coredns-7c65d6cfc9-22d75" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--22d75-" May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.040 [INFO][4507] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" Namespace="kube-system" Pod="coredns-7c65d6cfc9-22d75" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--22d75-eth0" May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.106 [INFO][4538] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" 
HandleID="k8s-pod-network.84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" Workload="localhost-k8s-coredns--7c65d6cfc9--22d75-eth0" May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.106 [INFO][4538] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" HandleID="k8s-pod-network.84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" Workload="localhost-k8s-coredns--7c65d6cfc9--22d75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000503510), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7c65d6cfc9-22d75", "timestamp":"2025-05-16 05:01:08.106057985 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.106 [INFO][4538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.106 [INFO][4538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.106 [INFO][4538] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.127 [INFO][4538] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" host="localhost" May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.135 [INFO][4538] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.144 [INFO][4538] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.148 [INFO][4538] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.153 [INFO][4538] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.153 [INFO][4538] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" host="localhost" May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.155 [INFO][4538] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922 May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.164 [INFO][4538] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" host="localhost" May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.173 [INFO][4538] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" host="localhost" May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.173 [INFO][4538] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" host="localhost" May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.173 [INFO][4538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 05:01:08.198768 containerd[1519]: 2025-05-16 05:01:08.173 [INFO][4538] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" HandleID="k8s-pod-network.84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" Workload="localhost-k8s-coredns--7c65d6cfc9--22d75-eth0" May 16 05:01:08.199701 containerd[1519]: 2025-05-16 05:01:08.176 [INFO][4507] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" Namespace="kube-system" Pod="coredns-7c65d6cfc9-22d75" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--22d75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--22d75-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eafc01ac-e2de-4f92-86b0-314fd75a243e", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7c65d6cfc9-22d75", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif17fcf8629c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:01:08.199701 containerd[1519]: 2025-05-16 05:01:08.177 [INFO][4507] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" Namespace="kube-system" Pod="coredns-7c65d6cfc9-22d75" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--22d75-eth0" May 16 05:01:08.199701 containerd[1519]: 2025-05-16 05:01:08.177 [INFO][4507] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif17fcf8629c ContainerID="84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" Namespace="kube-system" Pod="coredns-7c65d6cfc9-22d75" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--22d75-eth0" May 16 05:01:08.199701 containerd[1519]: 2025-05-16 05:01:08.181 [INFO][4507] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" Namespace="kube-system" Pod="coredns-7c65d6cfc9-22d75" 
WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--22d75-eth0" May 16 05:01:08.199701 containerd[1519]: 2025-05-16 05:01:08.181 [INFO][4507] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" Namespace="kube-system" Pod="coredns-7c65d6cfc9-22d75" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--22d75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7c65d6cfc9--22d75-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"eafc01ac-e2de-4f92-86b0-314fd75a243e", ResourceVersion:"790", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922", Pod:"coredns-7c65d6cfc9-22d75", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif17fcf8629c", MAC:"5a:a4:62:f4:8d:d7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:01:08.199701 containerd[1519]: 2025-05-16 05:01:08.192 [INFO][4507] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" Namespace="kube-system" Pod="coredns-7c65d6cfc9-22d75" WorkloadEndpoint="localhost-k8s-coredns--7c65d6cfc9--22d75-eth0" May 16 05:01:08.227389 containerd[1519]: time="2025-05-16T05:01:08.227352090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5jxv4,Uid:e6af650f-1da2-49ae-aae7-f7207bc906df,Namespace:calico-system,Attempt:0,} returns sandbox id \"43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9\"" May 16 05:01:08.246274 containerd[1519]: time="2025-05-16T05:01:08.245368661Z" level=info msg="connecting to shim 84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922" address="unix:///run/containerd/s/6741d00cbd7fd654c9ef16ab8236dcf5dd708f448be418abba45cee870118d93" namespace=k8s.io protocol=ttrpc version=3 May 16 05:01:08.282431 systemd[1]: Started cri-containerd-84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922.scope - libcontainer container 84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922. 
May 16 05:01:08.320416 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 05:01:08.340304 containerd[1519]: time="2025-05-16T05:01:08.340268940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-22d75,Uid:eafc01ac-e2de-4f92-86b0-314fd75a243e,Namespace:kube-system,Attempt:0,} returns sandbox id \"84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922\"" May 16 05:01:08.342961 containerd[1519]: time="2025-05-16T05:01:08.342929026Z" level=info msg="CreateContainer within sandbox \"84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 05:01:08.350820 containerd[1519]: time="2025-05-16T05:01:08.350781207Z" level=info msg="Container 1602d008ff2ee1887b34982cec048d11807335f0d9f5e09bdb492a1b8edeb054: CDI devices from CRI Config.CDIDevices: []" May 16 05:01:08.355552 containerd[1519]: time="2025-05-16T05:01:08.355501187Z" level=info msg="CreateContainer within sandbox \"84cf472c32a0e594068a9954ab620cd21f2bc25ceddc10f809336bf544a0a922\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1602d008ff2ee1887b34982cec048d11807335f0d9f5e09bdb492a1b8edeb054\"" May 16 05:01:08.356149 containerd[1519]: time="2025-05-16T05:01:08.356014100Z" level=info msg="StartContainer for \"1602d008ff2ee1887b34982cec048d11807335f0d9f5e09bdb492a1b8edeb054\"" May 16 05:01:08.357120 containerd[1519]: time="2025-05-16T05:01:08.357092447Z" level=info msg="connecting to shim 1602d008ff2ee1887b34982cec048d11807335f0d9f5e09bdb492a1b8edeb054" address="unix:///run/containerd/s/6741d00cbd7fd654c9ef16ab8236dcf5dd708f448be418abba45cee870118d93" protocol=ttrpc version=3 May 16 05:01:08.379388 systemd[1]: Started cri-containerd-1602d008ff2ee1887b34982cec048d11807335f0d9f5e09bdb492a1b8edeb054.scope - libcontainer container 1602d008ff2ee1887b34982cec048d11807335f0d9f5e09bdb492a1b8edeb054. 
May 16 05:01:08.406449 containerd[1519]: time="2025-05-16T05:01:08.406413662Z" level=info msg="StartContainer for \"1602d008ff2ee1887b34982cec048d11807335f0d9f5e09bdb492a1b8edeb054\" returns successfully" May 16 05:01:08.667351 systemd-networkd[1449]: calic95300cffae: Gained IPv6LL May 16 05:01:08.731367 systemd-networkd[1449]: cali474672a979b: Gained IPv6LL May 16 05:01:08.911434 containerd[1519]: time="2025-05-16T05:01:08.911391708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7f886b9f-6ssw6,Uid:103355f8-b44d-4322-8416-0a85611bc48f,Namespace:calico-apiserver,Attempt:0,}" May 16 05:01:09.026587 systemd-networkd[1449]: cali9ccbeb97643: Link UP May 16 05:01:09.026744 systemd-networkd[1449]: cali9ccbeb97643: Gained carrier May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:08.949 [INFO][4726] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--d7f886b9f--6ssw6-eth0 calico-apiserver-d7f886b9f- calico-apiserver 103355f8-b44d-4322-8416-0a85611bc48f 787 0 2025-05-16 05:00:39 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d7f886b9f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-d7f886b9f-6ssw6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9ccbeb97643 [] [] }} ContainerID="06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-6ssw6" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--6ssw6-" May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:08.949 [INFO][4726] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" 
Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-6ssw6" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--6ssw6-eth0" May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:08.978 [INFO][4739] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" HandleID="k8s-pod-network.06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" Workload="localhost-k8s-calico--apiserver--d7f886b9f--6ssw6-eth0" May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:08.978 [INFO][4739] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" HandleID="k8s-pod-network.06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" Workload="localhost-k8s-calico--apiserver--d7f886b9f--6ssw6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000338070), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-d7f886b9f-6ssw6", "timestamp":"2025-05-16 05:01:08.978123703 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:08.978 [INFO][4739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:08.978 [INFO][4739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:08.978 [INFO][4739] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:08.987 [INFO][4739] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" host="localhost" May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:08.992 [INFO][4739] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:08.997 [INFO][4739] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:09.000 [INFO][4739] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:09.003 [INFO][4739] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:09.003 [INFO][4739] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" host="localhost" May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:09.005 [INFO][4739] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:09.009 [INFO][4739] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" host="localhost" May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:09.017 [INFO][4739] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" host="localhost" May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:09.017 [INFO][4739] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" host="localhost" May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:09.017 [INFO][4739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 16 05:01:09.043754 containerd[1519]: 2025-05-16 05:01:09.017 [INFO][4739] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" HandleID="k8s-pod-network.06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" Workload="localhost-k8s-calico--apiserver--d7f886b9f--6ssw6-eth0" May 16 05:01:09.044743 containerd[1519]: 2025-05-16 05:01:09.021 [INFO][4726] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-6ssw6" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--6ssw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d7f886b9f--6ssw6-eth0", GenerateName:"calico-apiserver-d7f886b9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"103355f8-b44d-4322-8416-0a85611bc48f", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d7f886b9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-d7f886b9f-6ssw6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ccbeb97643", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:01:09.044743 containerd[1519]: 2025-05-16 05:01:09.021 [INFO][4726] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-6ssw6" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--6ssw6-eth0" May 16 05:01:09.044743 containerd[1519]: 2025-05-16 05:01:09.021 [INFO][4726] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ccbeb97643 ContainerID="06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-6ssw6" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--6ssw6-eth0" May 16 05:01:09.044743 containerd[1519]: 2025-05-16 05:01:09.027 [INFO][4726] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-6ssw6" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--6ssw6-eth0" May 16 05:01:09.044743 containerd[1519]: 2025-05-16 05:01:09.028 [INFO][4726] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-6ssw6" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--6ssw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--d7f886b9f--6ssw6-eth0", GenerateName:"calico-apiserver-d7f886b9f-", Namespace:"calico-apiserver", SelfLink:"", UID:"103355f8-b44d-4322-8416-0a85611bc48f", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.May, 16, 5, 0, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d7f886b9f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc", Pod:"calico-apiserver-d7f886b9f-6ssw6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ccbeb97643", MAC:"a2:d6:36:1b:6f:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 16 05:01:09.044743 containerd[1519]: 2025-05-16 05:01:09.040 [INFO][4726] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" Namespace="calico-apiserver" Pod="calico-apiserver-d7f886b9f-6ssw6" WorkloadEndpoint="localhost-k8s-calico--apiserver--d7f886b9f--6ssw6-eth0" May 16 05:01:09.080407 containerd[1519]: time="2025-05-16T05:01:09.080361835Z" level=info msg="connecting to shim 06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc" address="unix:///run/containerd/s/1beb82ca1309a445ab56a97ae14c572e9df6a1fe2a44e0a7a2b8e4e9a973f5c8" namespace=k8s.io protocol=ttrpc version=3 May 16 05:01:09.112461 systemd[1]: Started cri-containerd-06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc.scope - libcontainer container 06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc. May 16 05:01:09.124325 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 05:01:09.158279 kubelet[2661]: I0516 05:01:09.158081 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d7f886b9f-r957p" podStartSLOduration=28.260412914 podStartE2EDuration="30.158062077s" podCreationTimestamp="2025-05-16 05:00:39 +0000 UTC" firstStartedPulling="2025-05-16 05:01:06.11936055 +0000 UTC m=+41.283645458" lastFinishedPulling="2025-05-16 05:01:08.017009713 +0000 UTC m=+43.181294621" observedRunningTime="2025-05-16 05:01:09.143760374 +0000 UTC m=+44.308045282" watchObservedRunningTime="2025-05-16 05:01:09.158062077 +0000 UTC m=+44.322347025" May 16 05:01:09.160374 containerd[1519]: time="2025-05-16T05:01:09.160333169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d7f886b9f-6ssw6,Uid:103355f8-b44d-4322-8416-0a85611bc48f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc\"" May 16 05:01:09.168883 containerd[1519]: time="2025-05-16T05:01:09.168786305Z" level=info msg="CreateContainer within sandbox 
\"06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 16 05:01:09.190975 containerd[1519]: time="2025-05-16T05:01:09.183751761Z" level=info msg="Container ccc7b04b478de398af7644001237da1f410f2b261122d7ac83b8c73dc1324964: CDI devices from CRI Config.CDIDevices: []" May 16 05:01:09.197609 containerd[1519]: time="2025-05-16T05:01:09.197569790Z" level=info msg="CreateContainer within sandbox \"06032e06d02b0687a99ed896a3dd7d2e0d22740f475a8bd228e6cbd2c09840cc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ccc7b04b478de398af7644001237da1f410f2b261122d7ac83b8c73dc1324964\"" May 16 05:01:09.199176 containerd[1519]: time="2025-05-16T05:01:09.199145211Z" level=info msg="StartContainer for \"ccc7b04b478de398af7644001237da1f410f2b261122d7ac83b8c73dc1324964\"" May 16 05:01:09.200241 containerd[1519]: time="2025-05-16T05:01:09.200199598Z" level=info msg="connecting to shim ccc7b04b478de398af7644001237da1f410f2b261122d7ac83b8c73dc1324964" address="unix:///run/containerd/s/1beb82ca1309a445ab56a97ae14c572e9df6a1fe2a44e0a7a2b8e4e9a973f5c8" protocol=ttrpc version=3 May 16 05:01:09.225407 systemd[1]: Started cri-containerd-ccc7b04b478de398af7644001237da1f410f2b261122d7ac83b8c73dc1324964.scope - libcontainer container ccc7b04b478de398af7644001237da1f410f2b261122d7ac83b8c73dc1324964. 
May 16 05:01:09.279067 containerd[1519]: time="2025-05-16T05:01:09.278959587Z" level=info msg="StartContainer for \"ccc7b04b478de398af7644001237da1f410f2b261122d7ac83b8c73dc1324964\" returns successfully" May 16 05:01:09.627506 systemd-networkd[1449]: calif17fcf8629c: Gained IPv6LL May 16 05:01:10.075462 systemd-networkd[1449]: cali186829bc5fa: Gained IPv6LL May 16 05:01:10.117336 kubelet[2661]: I0516 05:01:10.117301 2661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 05:01:10.144312 kubelet[2661]: I0516 05:01:10.143649 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d7f886b9f-6ssw6" podStartSLOduration=31.143631976 podStartE2EDuration="31.143631976s" podCreationTimestamp="2025-05-16 05:00:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:01:10.143485497 +0000 UTC m=+45.307770445" watchObservedRunningTime="2025-05-16 05:01:10.143631976 +0000 UTC m=+45.307916924" May 16 05:01:10.144471 kubelet[2661]: I0516 05:01:10.144377 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-22d75" podStartSLOduration=41.144365327 podStartE2EDuration="41.144365327s" podCreationTimestamp="2025-05-16 05:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 05:01:09.160932322 +0000 UTC m=+44.325217230" watchObservedRunningTime="2025-05-16 05:01:10.144365327 +0000 UTC m=+45.308650315" May 16 05:01:10.157449 containerd[1519]: time="2025-05-16T05:01:10.157338691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=48045219" May 16 05:01:10.157816 containerd[1519]: time="2025-05-16T05:01:10.157400090Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:01:10.159267 containerd[1519]: time="2025-05-16T05:01:10.158867913Z" level=info msg="ImageCreate event name:\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:01:10.162099 containerd[1519]: time="2025-05-16T05:01:10.162056034Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:01:10.164028 containerd[1519]: time="2025-05-16T05:01:10.163998451Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"49414428\" in 2.146631462s" May 16 05:01:10.164028 containerd[1519]: time="2025-05-16T05:01:10.164031131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 16 05:01:10.166094 containerd[1519]: time="2025-05-16T05:01:10.165764110Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 16 05:01:10.171031 containerd[1519]: time="2025-05-16T05:01:10.170988607Z" level=info msg="CreateContainer within sandbox \"395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 16 05:01:10.186136 containerd[1519]: time="2025-05-16T05:01:10.186067786Z" level=info msg="Container 5de7336196e8e06eda0bacd786c89f452d63820acbb8c812d4352c2854c259d3: CDI devices 
from CRI Config.CDIDevices: []" May 16 05:01:10.196510 containerd[1519]: time="2025-05-16T05:01:10.196458622Z" level=info msg="CreateContainer within sandbox \"395dfca4d166cf913b9d4af08ad74d7869aacf6a2271f8593397d45a0d578d1e\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"5de7336196e8e06eda0bacd786c89f452d63820acbb8c812d4352c2854c259d3\"" May 16 05:01:10.199169 containerd[1519]: time="2025-05-16T05:01:10.199129630Z" level=info msg="StartContainer for \"5de7336196e8e06eda0bacd786c89f452d63820acbb8c812d4352c2854c259d3\"" May 16 05:01:10.206253 containerd[1519]: time="2025-05-16T05:01:10.204830521Z" level=info msg="connecting to shim 5de7336196e8e06eda0bacd786c89f452d63820acbb8c812d4352c2854c259d3" address="unix:///run/containerd/s/6a887013b7226740fb2d26273c91c853b76574beb73340e2e9eb78caf0d9e452" protocol=ttrpc version=3 May 16 05:01:10.243427 systemd[1]: Started cri-containerd-5de7336196e8e06eda0bacd786c89f452d63820acbb8c812d4352c2854c259d3.scope - libcontainer container 5de7336196e8e06eda0bacd786c89f452d63820acbb8c812d4352c2854c259d3. May 16 05:01:10.297842 containerd[1519]: time="2025-05-16T05:01:10.297725886Z" level=info msg="StartContainer for \"5de7336196e8e06eda0bacd786c89f452d63820acbb8c812d4352c2854c259d3\" returns successfully" May 16 05:01:10.587601 systemd-networkd[1449]: cali9ccbeb97643: Gained IPv6LL May 16 05:01:10.750494 systemd[1]: Started sshd@8-10.0.0.27:22-10.0.0.1:43084.service - OpenSSH per-connection server daemon (10.0.0.1:43084). May 16 05:01:10.823261 sshd[4892]: Accepted publickey for core from 10.0.0.1 port 43084 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 05:01:10.824526 sshd-session[4892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:01:10.831460 systemd-logind[1505]: New session 9 of user core. May 16 05:01:10.843504 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 16 05:01:11.111548 sshd[4894]: Connection closed by 10.0.0.1 port 43084 May 16 05:01:11.111085 sshd-session[4892]: pam_unix(sshd:session): session closed for user core May 16 05:01:11.114551 systemd[1]: sshd@8-10.0.0.27:22-10.0.0.1:43084.service: Deactivated successfully. May 16 05:01:11.116809 systemd[1]: session-9.scope: Deactivated successfully. May 16 05:01:11.119403 systemd-logind[1505]: Session 9 logged out. Waiting for processes to exit. May 16 05:01:11.121975 systemd-logind[1505]: Removed session 9. May 16 05:01:11.366316 containerd[1519]: time="2025-05-16T05:01:11.365963542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:01:11.367221 containerd[1519]: time="2025-05-16T05:01:11.367138688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8226240" May 16 05:01:11.368328 containerd[1519]: time="2025-05-16T05:01:11.368302315Z" level=info msg="ImageCreate event name:\"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:01:11.370281 containerd[1519]: time="2025-05-16T05:01:11.370224132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:01:11.371172 containerd[1519]: time="2025-05-16T05:01:11.370719527Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"9595481\" in 1.204916017s" May 16 05:01:11.371172 containerd[1519]: time="2025-05-16T05:01:11.370751606Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\"" May 16 05:01:11.373073 containerd[1519]: time="2025-05-16T05:01:11.373042859Z" level=info msg="CreateContainer within sandbox \"43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 16 05:01:11.401553 containerd[1519]: time="2025-05-16T05:01:11.401499847Z" level=info msg="Container 1b01bf5b9d89f335f427d6096585541b51d4890c054631328a6641736775b647: CDI devices from CRI Config.CDIDevices: []" May 16 05:01:11.418423 containerd[1519]: time="2025-05-16T05:01:11.418366610Z" level=info msg="CreateContainer within sandbox \"43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"1b01bf5b9d89f335f427d6096585541b51d4890c054631328a6641736775b647\"" May 16 05:01:11.419279 containerd[1519]: time="2025-05-16T05:01:11.419223640Z" level=info msg="StartContainer for \"1b01bf5b9d89f335f427d6096585541b51d4890c054631328a6641736775b647\"" May 16 05:01:11.420644 containerd[1519]: time="2025-05-16T05:01:11.420612704Z" level=info msg="connecting to shim 1b01bf5b9d89f335f427d6096585541b51d4890c054631328a6641736775b647" address="unix:///run/containerd/s/1710d5fe778cc8ea6d6efaf894939dab80d03803ff222c1ef65e3d03c742a8ae" protocol=ttrpc version=3 May 16 05:01:11.440412 systemd[1]: Started cri-containerd-1b01bf5b9d89f335f427d6096585541b51d4890c054631328a6641736775b647.scope - libcontainer container 1b01bf5b9d89f335f427d6096585541b51d4890c054631328a6641736775b647. 
May 16 05:01:11.476880 containerd[1519]: time="2025-05-16T05:01:11.476761047Z" level=info msg="StartContainer for \"1b01bf5b9d89f335f427d6096585541b51d4890c054631328a6641736775b647\" returns successfully" May 16 05:01:11.478143 containerd[1519]: time="2025-05-16T05:01:11.478112952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 16 05:01:12.127982 kubelet[2661]: I0516 05:01:12.127645 2661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 05:01:12.821374 containerd[1519]: time="2025-05-16T05:01:12.821326107Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:01:12.822152 containerd[1519]: time="2025-05-16T05:01:12.822117858Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=13749925" May 16 05:01:12.823255 containerd[1519]: time="2025-05-16T05:01:12.823155286Z" level=info msg="ImageCreate event name:\"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:01:12.825606 containerd[1519]: time="2025-05-16T05:01:12.825576258Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 16 05:01:12.826125 containerd[1519]: time="2025-05-16T05:01:12.826055973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"15119118\" in 1.347856062s" May 
16 05:01:12.826125 containerd[1519]: time="2025-05-16T05:01:12.826087013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\"" May 16 05:01:12.828497 containerd[1519]: time="2025-05-16T05:01:12.828408626Z" level=info msg="CreateContainer within sandbox \"43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 16 05:01:12.836251 containerd[1519]: time="2025-05-16T05:01:12.835371467Z" level=info msg="Container 75452631573d4b672b972426306d6c57d7ca3e6f0e0c6edf4478e650dd22c3b9: CDI devices from CRI Config.CDIDevices: []" May 16 05:01:12.842430 containerd[1519]: time="2025-05-16T05:01:12.842368347Z" level=info msg="CreateContainer within sandbox \"43364675643179a64cffd2876d2ffe9f37de3a79a2efe2e95a46cd4bb9b4a7c9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"75452631573d4b672b972426306d6c57d7ca3e6f0e0c6edf4478e650dd22c3b9\"" May 16 05:01:12.843062 containerd[1519]: time="2025-05-16T05:01:12.843032020Z" level=info msg="StartContainer for \"75452631573d4b672b972426306d6c57d7ca3e6f0e0c6edf4478e650dd22c3b9\"" May 16 05:01:12.844886 containerd[1519]: time="2025-05-16T05:01:12.844859919Z" level=info msg="connecting to shim 75452631573d4b672b972426306d6c57d7ca3e6f0e0c6edf4478e650dd22c3b9" address="unix:///run/containerd/s/1710d5fe778cc8ea6d6efaf894939dab80d03803ff222c1ef65e3d03c742a8ae" protocol=ttrpc version=3 May 16 05:01:12.869458 systemd[1]: Started cri-containerd-75452631573d4b672b972426306d6c57d7ca3e6f0e0c6edf4478e650dd22c3b9.scope - libcontainer container 75452631573d4b672b972426306d6c57d7ca3e6f0e0c6edf4478e650dd22c3b9. 
May 16 05:01:12.905648 containerd[1519]: time="2025-05-16T05:01:12.905531949Z" level=info msg="StartContainer for \"75452631573d4b672b972426306d6c57d7ca3e6f0e0c6edf4478e650dd22c3b9\" returns successfully" May 16 05:01:12.912955 containerd[1519]: time="2025-05-16T05:01:12.912804426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 05:01:12.926818 kubelet[2661]: I0516 05:01:12.926741 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-f4f5c4487-mct8b" podStartSLOduration=26.07964171 podStartE2EDuration="28.926723067s" podCreationTimestamp="2025-05-16 05:00:44 +0000 UTC" firstStartedPulling="2025-05-16 05:01:07.317978121 +0000 UTC m=+42.482263069" lastFinishedPulling="2025-05-16 05:01:10.165059478 +0000 UTC m=+45.329344426" observedRunningTime="2025-05-16 05:01:11.135140079 +0000 UTC m=+46.299425027" watchObservedRunningTime="2025-05-16 05:01:12.926723067 +0000 UTC m=+48.091008015" May 16 05:01:13.016486 kubelet[2661]: I0516 05:01:13.016430 2661 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 16 05:01:13.018590 kubelet[2661]: I0516 05:01:13.018555 2661 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 16 05:01:13.061930 containerd[1519]: time="2025-05-16T05:01:13.061863948Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 05:01:13.063047 containerd[1519]: time="2025-05-16T05:01:13.062979855Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 05:01:13.063167 containerd[1519]: time="2025-05-16T05:01:13.063008775Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 05:01:13.063364 kubelet[2661]: E0516 05:01:13.063304 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 05:01:13.063450 kubelet[2661]: E0516 05:01:13.063377 2661 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 05:01:13.063582 kubelet[2661]: E0516 05:01:13.063504 2661 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ef125e5441f1476585809f36c5a808ba,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdf96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-56cbc7d65f-plq4q_calico-system(5f454e4d-8aa4-46c9-8e87-444cb605f063): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 05:01:13.066423 containerd[1519]: 
time="2025-05-16T05:01:13.066392057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 05:01:13.144411 kubelet[2661]: I0516 05:01:13.144275 2661 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5jxv4" podStartSLOduration=24.545916305 podStartE2EDuration="29.144256274s" podCreationTimestamp="2025-05-16 05:00:44 +0000 UTC" firstStartedPulling="2025-05-16 05:01:08.228496595 +0000 UTC m=+43.392781543" lastFinishedPulling="2025-05-16 05:01:12.826836604 +0000 UTC m=+47.991121512" observedRunningTime="2025-05-16 05:01:13.143461763 +0000 UTC m=+48.307746791" watchObservedRunningTime="2025-05-16 05:01:13.144256274 +0000 UTC m=+48.308541222" May 16 05:01:13.230975 containerd[1519]: time="2025-05-16T05:01:13.230916794Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 05:01:13.231898 containerd[1519]: time="2025-05-16T05:01:13.231858903Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 05:01:13.231984 containerd[1519]: time="2025-05-16T05:01:13.231916343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 16 05:01:13.232125 kubelet[2661]: E0516 05:01:13.232076 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 05:01:13.232213 kubelet[2661]: E0516 05:01:13.232131 2661 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 05:01:13.232318 kubelet[2661]: E0516 05:01:13.232266 2661 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdf96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-56cbc7d65f-plq4q_calico-system(5f454e4d-8aa4-46c9-8e87-444cb605f063): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 05:01:13.233591 kubelet[2661]: E0516 05:01:13.233536 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-56cbc7d65f-plq4q" podUID="5f454e4d-8aa4-46c9-8e87-444cb605f063" May 16 05:01:14.124974 kubelet[2661]: I0516 05:01:14.124786 2661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 05:01:14.248666 containerd[1519]: time="2025-05-16T05:01:14.248601305Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b0736176feb65d7621fd07a159d2f2154bb9bc66274d1423b3aaa31b63facd7\" id:\"f955f189f0ea325f47d2313d238e77989c962f07e7f2cc4b0f1313a8880c9625\" pid:5001 exited_at:{seconds:1747371674 nanos:246264850}" May 16 05:01:14.338625 containerd[1519]: time="2025-05-16T05:01:14.338584534Z" level=info msg="TaskExit event in 
podsandbox handler container_id:\"4b0736176feb65d7621fd07a159d2f2154bb9bc66274d1423b3aaa31b63facd7\" id:\"7a46e600934dc8adda906ef336fd64078ea827c405bc771a58a432623d5963ea\" pid:5025 exited_at:{seconds:1747371674 nanos:338308737}" May 16 05:01:16.126728 systemd[1]: Started sshd@9-10.0.0.27:22-10.0.0.1:40262.service - OpenSSH per-connection server daemon (10.0.0.1:40262). May 16 05:01:16.182302 sshd[5049]: Accepted publickey for core from 10.0.0.1 port 40262 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 05:01:16.183827 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:01:16.187702 systemd-logind[1505]: New session 10 of user core. May 16 05:01:16.198404 systemd[1]: Started session-10.scope - Session 10 of User core. May 16 05:01:16.260727 kubelet[2661]: I0516 05:01:16.260610 2661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 05:01:16.384267 sshd[5051]: Connection closed by 10.0.0.1 port 40262 May 16 05:01:16.384916 sshd-session[5049]: pam_unix(sshd:session): session closed for user core May 16 05:01:16.398451 systemd[1]: sshd@9-10.0.0.27:22-10.0.0.1:40262.service: Deactivated successfully. May 16 05:01:16.400324 systemd[1]: session-10.scope: Deactivated successfully. May 16 05:01:16.401183 systemd-logind[1505]: Session 10 logged out. Waiting for processes to exit. May 16 05:01:16.403969 systemd[1]: Started sshd@10-10.0.0.27:22-10.0.0.1:40266.service - OpenSSH per-connection server daemon (10.0.0.1:40266). May 16 05:01:16.404883 systemd-logind[1505]: Removed session 10. May 16 05:01:16.452874 sshd[5070]: Accepted publickey for core from 10.0.0.1 port 40266 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 05:01:16.453627 sshd-session[5070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:01:16.457974 systemd-logind[1505]: New session 11 of user core. 
May 16 05:01:16.470407 systemd[1]: Started session-11.scope - Session 11 of User core. May 16 05:01:16.670489 sshd[5072]: Connection closed by 10.0.0.1 port 40266 May 16 05:01:16.670735 sshd-session[5070]: pam_unix(sshd:session): session closed for user core May 16 05:01:16.684356 systemd[1]: sshd@10-10.0.0.27:22-10.0.0.1:40266.service: Deactivated successfully. May 16 05:01:16.689523 systemd[1]: session-11.scope: Deactivated successfully. May 16 05:01:16.692530 systemd-logind[1505]: Session 11 logged out. Waiting for processes to exit. May 16 05:01:16.697830 systemd[1]: Started sshd@11-10.0.0.27:22-10.0.0.1:40272.service - OpenSSH per-connection server daemon (10.0.0.1:40272). May 16 05:01:16.700127 systemd-logind[1505]: Removed session 11. May 16 05:01:16.745027 sshd[5084]: Accepted publickey for core from 10.0.0.1 port 40272 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 05:01:16.746262 sshd-session[5084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:01:16.750098 systemd-logind[1505]: New session 12 of user core. May 16 05:01:16.756363 systemd[1]: Started session-12.scope - Session 12 of User core. May 16 05:01:16.886205 sshd[5086]: Connection closed by 10.0.0.1 port 40272 May 16 05:01:16.886547 sshd-session[5084]: pam_unix(sshd:session): session closed for user core May 16 05:01:16.890448 systemd[1]: sshd@11-10.0.0.27:22-10.0.0.1:40272.service: Deactivated successfully. May 16 05:01:16.892401 systemd[1]: session-12.scope: Deactivated successfully. May 16 05:01:16.893124 systemd-logind[1505]: Session 12 logged out. Waiting for processes to exit. May 16 05:01:16.895690 systemd-logind[1505]: Removed session 12. 
May 16 05:01:19.910599 containerd[1519]: time="2025-05-16T05:01:19.910532735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 16 05:01:20.089811 containerd[1519]: time="2025-05-16T05:01:20.089757055Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 05:01:20.090816 containerd[1519]: time="2025-05-16T05:01:20.090769605Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 16 05:01:20.090913 containerd[1519]: time="2025-05-16T05:01:20.090837205Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 16 05:01:20.091046 kubelet[2661]: E0516 05:01:20.090992 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 05:01:20.091046 kubelet[2661]: E0516 05:01:20.091047 2661 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 16 05:01:20.091706 kubelet[2661]: E0516 05:01:20.091167 2661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xcx28,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-8f77d7b6c-nwhgj_calico-system(3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 05:01:20.092791 kubelet[2661]: E0516 05:01:20.092751 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-8f77d7b6c-nwhgj" podUID="3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73" May 16 05:01:21.898864 systemd[1]: Started sshd@12-10.0.0.27:22-10.0.0.1:40282.service - OpenSSH per-connection server daemon (10.0.0.1:40282). May 16 05:01:21.930549 kubelet[2661]: I0516 05:01:21.930494 2661 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 05:01:21.950961 sshd[5108]: Accepted publickey for core from 10.0.0.1 port 40282 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 05:01:21.951939 sshd-session[5108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:01:21.961052 systemd-logind[1505]: New session 13 of user core. May 16 05:01:21.966372 systemd[1]: Started session-13.scope - Session 13 of User core. May 16 05:01:21.971250 containerd[1519]: time="2025-05-16T05:01:21.971188225Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5de7336196e8e06eda0bacd786c89f452d63820acbb8c812d4352c2854c259d3\" id:\"b6d38afd185752dab52b9007e2c7ec1cd0933b8a0eec9512d286d0eacad1726d\" pid:5124 exited_at:{seconds:1747371681 nanos:970934548}" May 16 05:01:22.008451 containerd[1519]: time="2025-05-16T05:01:22.008411091Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5de7336196e8e06eda0bacd786c89f452d63820acbb8c812d4352c2854c259d3\" id:\"c46eb4048e927b774a1aca0c9152186c95cac01b6cd0b1fb4358b0db53af1071\" pid:5147 exited_at:{seconds:1747371682 nanos:7866656}" May 16 05:01:22.136736 sshd[5130]: Connection closed by 10.0.0.1 port 40282 May 16 05:01:22.137073 sshd-session[5108]: pam_unix(sshd:session): session closed for user core May 16 05:01:22.151904 systemd[1]: sshd@12-10.0.0.27:22-10.0.0.1:40282.service: Deactivated successfully. May 16 05:01:22.155093 systemd[1]: session-13.scope: Deactivated successfully. 
May 16 05:01:22.155897 systemd-logind[1505]: Session 13 logged out. Waiting for processes to exit. May 16 05:01:22.158225 systemd[1]: Started sshd@13-10.0.0.27:22-10.0.0.1:40298.service - OpenSSH per-connection server daemon (10.0.0.1:40298). May 16 05:01:22.159531 systemd-logind[1505]: Removed session 13. May 16 05:01:22.211875 sshd[5169]: Accepted publickey for core from 10.0.0.1 port 40298 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 05:01:22.213070 sshd-session[5169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:01:22.216791 systemd-logind[1505]: New session 14 of user core. May 16 05:01:22.223450 systemd[1]: Started session-14.scope - Session 14 of User core. May 16 05:01:22.463855 sshd[5172]: Connection closed by 10.0.0.1 port 40298 May 16 05:01:22.464406 sshd-session[5169]: pam_unix(sshd:session): session closed for user core May 16 05:01:22.476296 systemd[1]: sshd@13-10.0.0.27:22-10.0.0.1:40298.service: Deactivated successfully. May 16 05:01:22.478426 systemd[1]: session-14.scope: Deactivated successfully. May 16 05:01:22.479167 systemd-logind[1505]: Session 14 logged out. Waiting for processes to exit. May 16 05:01:22.481590 systemd[1]: Started sshd@14-10.0.0.27:22-10.0.0.1:39494.service - OpenSSH per-connection server daemon (10.0.0.1:39494). May 16 05:01:22.482314 systemd-logind[1505]: Removed session 14. May 16 05:01:22.529107 sshd[5183]: Accepted publickey for core from 10.0.0.1 port 39494 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 05:01:22.529909 sshd-session[5183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:01:22.533967 systemd-logind[1505]: New session 15 of user core. May 16 05:01:22.543448 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 16 05:01:23.910576 kubelet[2661]: E0516 05:01:23.910530 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\"\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\"\"]" pod="calico-system/whisker-56cbc7d65f-plq4q" podUID="5f454e4d-8aa4-46c9-8e87-444cb605f063" May 16 05:01:24.187333 sshd[5185]: Connection closed by 10.0.0.1 port 39494 May 16 05:01:24.187117 sshd-session[5183]: pam_unix(sshd:session): session closed for user core May 16 05:01:24.198185 systemd[1]: sshd@14-10.0.0.27:22-10.0.0.1:39494.service: Deactivated successfully. May 16 05:01:24.203525 systemd[1]: session-15.scope: Deactivated successfully. May 16 05:01:24.203914 systemd[1]: session-15.scope: Consumed 531ms CPU time, 78.7M memory peak. May 16 05:01:24.205250 systemd-logind[1505]: Session 15 logged out. Waiting for processes to exit. May 16 05:01:24.214509 systemd[1]: Started sshd@15-10.0.0.27:22-10.0.0.1:39504.service - OpenSSH per-connection server daemon (10.0.0.1:39504). May 16 05:01:24.215029 systemd-logind[1505]: Removed session 15. May 16 05:01:24.282942 sshd[5206]: Accepted publickey for core from 10.0.0.1 port 39504 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 05:01:24.284404 sshd-session[5206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:01:24.289132 systemd-logind[1505]: New session 16 of user core. May 16 05:01:24.296399 systemd[1]: Started session-16.scope - Session 16 of User core. May 16 05:01:24.587740 sshd[5208]: Connection closed by 10.0.0.1 port 39504 May 16 05:01:24.588084 sshd-session[5206]: pam_unix(sshd:session): session closed for user core May 16 05:01:24.597696 systemd[1]: sshd@15-10.0.0.27:22-10.0.0.1:39504.service: Deactivated successfully. 
May 16 05:01:24.600435 systemd[1]: session-16.scope: Deactivated successfully. May 16 05:01:24.602183 systemd-logind[1505]: Session 16 logged out. Waiting for processes to exit. May 16 05:01:24.606489 systemd[1]: Started sshd@16-10.0.0.27:22-10.0.0.1:39512.service - OpenSSH per-connection server daemon (10.0.0.1:39512). May 16 05:01:24.607457 systemd-logind[1505]: Removed session 16. May 16 05:01:24.657047 sshd[5219]: Accepted publickey for core from 10.0.0.1 port 39512 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 05:01:24.658271 sshd-session[5219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:01:24.663679 systemd-logind[1505]: New session 17 of user core. May 16 05:01:24.679420 systemd[1]: Started session-17.scope - Session 17 of User core. May 16 05:01:24.804981 sshd[5221]: Connection closed by 10.0.0.1 port 39512 May 16 05:01:24.805453 sshd-session[5219]: pam_unix(sshd:session): session closed for user core May 16 05:01:24.808830 systemd[1]: sshd@16-10.0.0.27:22-10.0.0.1:39512.service: Deactivated successfully. May 16 05:01:24.810490 systemd[1]: session-17.scope: Deactivated successfully. May 16 05:01:24.813302 systemd-logind[1505]: Session 17 logged out. Waiting for processes to exit. May 16 05:01:24.815817 systemd-logind[1505]: Removed session 17. May 16 05:01:29.816659 systemd[1]: Started sshd@17-10.0.0.27:22-10.0.0.1:39514.service - OpenSSH per-connection server daemon (10.0.0.1:39514). May 16 05:01:29.861664 sshd[5240]: Accepted publickey for core from 10.0.0.1 port 39514 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 05:01:29.862767 sshd-session[5240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:01:29.866570 systemd-logind[1505]: New session 18 of user core. May 16 05:01:29.882479 systemd[1]: Started session-18.scope - Session 18 of User core. 
May 16 05:01:30.004339 sshd[5242]: Connection closed by 10.0.0.1 port 39514 May 16 05:01:30.004652 sshd-session[5240]: pam_unix(sshd:session): session closed for user core May 16 05:01:30.007984 systemd[1]: sshd@17-10.0.0.27:22-10.0.0.1:39514.service: Deactivated successfully. May 16 05:01:30.009693 systemd[1]: session-18.scope: Deactivated successfully. May 16 05:01:30.010396 systemd-logind[1505]: Session 18 logged out. Waiting for processes to exit. May 16 05:01:30.011637 systemd-logind[1505]: Removed session 18. May 16 05:01:35.016784 systemd[1]: Started sshd@18-10.0.0.27:22-10.0.0.1:53720.service - OpenSSH per-connection server daemon (10.0.0.1:53720). May 16 05:01:35.063859 sshd[5262]: Accepted publickey for core from 10.0.0.1 port 53720 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 05:01:35.064977 sshd-session[5262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:01:35.068966 systemd-logind[1505]: New session 19 of user core. May 16 05:01:35.079404 systemd[1]: Started session-19.scope - Session 19 of User core. May 16 05:01:35.228804 sshd[5264]: Connection closed by 10.0.0.1 port 53720 May 16 05:01:35.228255 sshd-session[5262]: pam_unix(sshd:session): session closed for user core May 16 05:01:35.231957 systemd[1]: sshd@18-10.0.0.27:22-10.0.0.1:53720.service: Deactivated successfully. May 16 05:01:35.236860 systemd[1]: session-19.scope: Deactivated successfully. May 16 05:01:35.238476 systemd-logind[1505]: Session 19 logged out. Waiting for processes to exit. May 16 05:01:35.239795 systemd-logind[1505]: Removed session 19. 
May 16 05:01:35.912812 kubelet[2661]: E0516 05:01:35.912468 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\"\"" pod="calico-system/goldmane-8f77d7b6c-nwhgj" podUID="3a8dd0e7-85f8-4a75-8ec5-c5d246bfec73" May 16 05:01:36.918238 containerd[1519]: time="2025-05-16T05:01:36.918189234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 16 05:01:37.067447 containerd[1519]: time="2025-05-16T05:01:37.067400179Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 05:01:37.068409 containerd[1519]: time="2025-05-16T05:01:37.068297093Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 16 05:01:37.068409 containerd[1519]: time="2025-05-16T05:01:37.068366773Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 16 05:01:37.068546 kubelet[2661]: E0516 05:01:37.068500 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to 
https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 05:01:37.068913 kubelet[2661]: E0516 05:01:37.068548 2661 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 16 05:01:37.068913 kubelet[2661]: E0516 05:01:37.068646 2661 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ef125e5441f1476585809f36c5a808ba,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xdf96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault
,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-56cbc7d65f-plq4q_calico-system(5f454e4d-8aa4-46c9-8e87-444cb605f063): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 05:01:37.071819 containerd[1519]: time="2025-05-16T05:01:37.071770072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 16 05:01:37.236064 containerd[1519]: time="2025-05-16T05:01:37.236026375Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 16 05:01:37.236927 containerd[1519]: time="2025-05-16T05:01:37.236894890Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 16 05:01:37.237064 containerd[1519]: time="2025-05-16T05:01:37.236959569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 
16 05:01:37.237138 kubelet[2661]: E0516 05:01:37.237103 2661 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 05:01:37.237193 kubelet[2661]: E0516 05:01:37.237148 2661 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 16 05:01:37.237306 kubelet[2661]: E0516 05:01:37.237269 2661 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xdf96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-56cbc7d65f-plq4q_calico-system(5f454e4d-8aa4-46c9-8e87-444cb605f063): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": 
failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 16 05:01:37.238774 kubelet[2661]: E0516 05:01:37.238716 2661 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-56cbc7d65f-plq4q" podUID="5f454e4d-8aa4-46c9-8e87-444cb605f063" May 16 05:01:40.244350 systemd[1]: Started sshd@19-10.0.0.27:22-10.0.0.1:53724.service - OpenSSH per-connection server daemon (10.0.0.1:53724). May 16 05:01:40.294851 sshd[5279]: Accepted publickey for core from 10.0.0.1 port 53724 ssh2: RSA SHA256:pIcj68sll2HTyzDhHl/6CkZXzTFU6nUsBwi/oAOdAIE May 16 05:01:40.295968 sshd-session[5279]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 16 05:01:40.300192 systemd-logind[1505]: New session 20 of user core. May 16 05:01:40.308412 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 16 05:01:40.433054 sshd[5281]: Connection closed by 10.0.0.1 port 53724 May 16 05:01:40.433392 sshd-session[5279]: pam_unix(sshd:session): session closed for user core May 16 05:01:40.438912 systemd[1]: sshd@19-10.0.0.27:22-10.0.0.1:53724.service: Deactivated successfully. May 16 05:01:40.440737 systemd[1]: session-20.scope: Deactivated successfully. May 16 05:01:40.441948 systemd-logind[1505]: Session 20 logged out. Waiting for processes to exit. May 16 05:01:40.444456 systemd-logind[1505]: Removed session 20.