May 27 02:52:26.810159 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 27 02:52:26.810180 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 27 01:20:04 -00 2025
May 27 02:52:26.810190 kernel: KASLR enabled
May 27 02:52:26.810196 kernel: efi: EFI v2.7 by EDK II
May 27 02:52:26.810201 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
May 27 02:52:26.810207 kernel: random: crng init done
May 27 02:52:26.810214 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
May 27 02:52:26.810220 kernel: secureboot: Secure boot enabled
May 27 02:52:26.810226 kernel: ACPI: Early table checksum verification disabled
May 27 02:52:26.810240 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
May 27 02:52:26.810246 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 27 02:52:26.810252 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:52:26.810258 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:52:26.810264 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:52:26.810271 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:52:26.810279 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:52:26.810285 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:52:26.810291 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:52:26.810297 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:52:26.810303 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:52:26.810322 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 27 02:52:26.810329 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 27 02:52:26.810335 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 27 02:52:26.810341 kernel: NODE_DATA(0) allocated [mem 0xdc737dc0-0xdc73efff]
May 27 02:52:26.810347 kernel: Zone ranges:
May 27 02:52:26.810355 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 27 02:52:26.810361 kernel: DMA32 empty
May 27 02:52:26.810367 kernel: Normal empty
May 27 02:52:26.810373 kernel: Device empty
May 27 02:52:26.810379 kernel: Movable zone start for each node
May 27 02:52:26.810385 kernel: Early memory node ranges
May 27 02:52:26.810391 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
May 27 02:52:26.810397 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
May 27 02:52:26.810403 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
May 27 02:52:26.810409 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
May 27 02:52:26.810415 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
May 27 02:52:26.810421 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
May 27 02:52:26.810429 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
May 27 02:52:26.810435 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
May 27 02:52:26.810441 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 27 02:52:26.810450 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 27 02:52:26.810456 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 27 02:52:26.810463 kernel: psci: probing for conduit method from ACPI.
May 27 02:52:26.810469 kernel: psci: PSCIv1.1 detected in firmware.
May 27 02:52:26.810477 kernel: psci: Using standard PSCI v0.2 function IDs
May 27 02:52:26.810483 kernel: psci: Trusted OS migration not required
May 27 02:52:26.810490 kernel: psci: SMC Calling Convention v1.1
May 27 02:52:26.810496 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 27 02:52:26.810503 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 27 02:52:26.810510 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 27 02:52:26.810516 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 27 02:52:26.810523 kernel: Detected PIPT I-cache on CPU0
May 27 02:52:26.810529 kernel: CPU features: detected: GIC system register CPU interface
May 27 02:52:26.810537 kernel: CPU features: detected: Spectre-v4
May 27 02:52:26.810543 kernel: CPU features: detected: Spectre-BHB
May 27 02:52:26.810550 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 27 02:52:26.810556 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 27 02:52:26.810563 kernel: CPU features: detected: ARM erratum 1418040
May 27 02:52:26.810569 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 27 02:52:26.810576 kernel: alternatives: applying boot alternatives
May 27 02:52:26.810583 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4c3f98aae7a61b3dcbab6391ba922461adab29dbcb79fd6e18169f93c5a4ab5a
May 27 02:52:26.810590 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 02:52:26.810597 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 02:52:26.810603 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 02:52:26.810611 kernel: Fallback order for Node 0: 0
May 27 02:52:26.810618 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 27 02:52:26.810624 kernel: Policy zone: DMA
May 27 02:52:26.810630 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 02:52:26.810637 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 27 02:52:26.810643 kernel: software IO TLB: area num 4.
May 27 02:52:26.810650 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 27 02:52:26.810656 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
May 27 02:52:26.810663 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 27 02:52:26.810669 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 02:52:26.810676 kernel: rcu: RCU event tracing is enabled.
May 27 02:52:26.810683 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 27 02:52:26.810691 kernel: Trampoline variant of Tasks RCU enabled.
May 27 02:52:26.810697 kernel: Tracing variant of Tasks RCU enabled.
May 27 02:52:26.810704 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 02:52:26.810710 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 27 02:52:26.810717 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 02:52:26.810724 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 02:52:26.810730 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 27 02:52:26.810736 kernel: GICv3: 256 SPIs implemented
May 27 02:52:26.810743 kernel: GICv3: 0 Extended SPIs implemented
May 27 02:52:26.810749 kernel: Root IRQ handler: gic_handle_irq
May 27 02:52:26.810756 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 27 02:52:26.810763 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 27 02:52:26.810770 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 27 02:52:26.810776 kernel: ITS [mem 0x08080000-0x0809ffff]
May 27 02:52:26.810783 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
May 27 02:52:26.810790 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
May 27 02:52:26.810796 kernel: GICv3: using LPI property table @0x00000000400f0000
May 27 02:52:26.810802 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 27 02:52:26.810809 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 02:52:26.810815 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 02:52:26.810822 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 27 02:52:26.810834 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 27 02:52:26.810842 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 27 02:52:26.810850 kernel: arm-pv: using stolen time PV
May 27 02:52:26.810857 kernel: Console: colour dummy device 80x25
May 27 02:52:26.810864 kernel: ACPI: Core revision 20240827
May 27 02:52:26.810871 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 27 02:52:26.810877 kernel: pid_max: default: 32768 minimum: 301
May 27 02:52:26.810884 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 02:52:26.810891 kernel: landlock: Up and running.
May 27 02:52:26.810897 kernel: SELinux: Initializing.
May 27 02:52:26.810904 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 02:52:26.810912 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 02:52:26.810919 kernel: rcu: Hierarchical SRCU implementation.
May 27 02:52:26.810925 kernel: rcu: Max phase no-delay instances is 400.
May 27 02:52:26.810932 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 02:52:26.810939 kernel: Remapping and enabling EFI services.
May 27 02:52:26.810945 kernel: smp: Bringing up secondary CPUs ...
May 27 02:52:26.810952 kernel: Detected PIPT I-cache on CPU1
May 27 02:52:26.810959 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 27 02:52:26.810965 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 27 02:52:26.810974 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 02:52:26.810985 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 27 02:52:26.810992 kernel: Detected PIPT I-cache on CPU2
May 27 02:52:26.811000 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 27 02:52:26.811007 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 27 02:52:26.811014 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 02:52:26.811027 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 27 02:52:26.811036 kernel: Detected PIPT I-cache on CPU3
May 27 02:52:26.811043 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 27 02:52:26.811052 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 27 02:52:26.811059 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 02:52:26.811065 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 27 02:52:26.811072 kernel: smp: Brought up 1 node, 4 CPUs
May 27 02:52:26.811079 kernel: SMP: Total of 4 processors activated.
May 27 02:52:26.811086 kernel: CPU: All CPU(s) started at EL1
May 27 02:52:26.811093 kernel: CPU features: detected: 32-bit EL0 Support
May 27 02:52:26.811100 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 27 02:52:26.811109 kernel: CPU features: detected: Common not Private translations
May 27 02:52:26.811116 kernel: CPU features: detected: CRC32 instructions
May 27 02:52:26.811123 kernel: CPU features: detected: Enhanced Virtualization Traps
May 27 02:52:26.811130 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 27 02:52:26.811137 kernel: CPU features: detected: LSE atomic instructions
May 27 02:52:26.811144 kernel: CPU features: detected: Privileged Access Never
May 27 02:52:26.811151 kernel: CPU features: detected: RAS Extension Support
May 27 02:52:26.811158 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 27 02:52:26.811165 kernel: alternatives: applying system-wide alternatives
May 27 02:52:26.811172 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 27 02:52:26.811181 kernel: Memory: 2438884K/2572288K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 127636K reserved, 0K cma-reserved)
May 27 02:52:26.811188 kernel: devtmpfs: initialized
May 27 02:52:26.811195 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 02:52:26.811202 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 27 02:52:26.811209 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 27 02:52:26.811216 kernel: 0 pages in range for non-PLT usage
May 27 02:52:26.811223 kernel: 508544 pages in range for PLT usage
May 27 02:52:26.811230 kernel: pinctrl core: initialized pinctrl subsystem
May 27 02:52:26.811237 kernel: SMBIOS 3.0.0 present.
May 27 02:52:26.811245 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 27 02:52:26.811252 kernel: DMI: Memory slots populated: 1/1
May 27 02:52:26.811262 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 02:52:26.811269 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 27 02:52:26.811276 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 27 02:52:26.811283 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 27 02:52:26.811290 kernel: audit: initializing netlink subsys (disabled)
May 27 02:52:26.811297 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
May 27 02:52:26.811304 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 02:52:26.811321 kernel: cpuidle: using governor menu
May 27 02:52:26.811328 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 27 02:52:26.811335 kernel: ASID allocator initialised with 32768 entries
May 27 02:52:26.811342 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 02:52:26.811349 kernel: Serial: AMBA PL011 UART driver
May 27 02:52:26.811356 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 02:52:26.811363 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 27 02:52:26.811370 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 27 02:52:26.811379 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 27 02:52:26.811386 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 02:52:26.811393 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 27 02:52:26.811400 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 27 02:52:26.811407 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 27 02:52:26.811413 kernel: ACPI: Added _OSI(Module Device)
May 27 02:52:26.811420 kernel: ACPI: Added _OSI(Processor Device)
May 27 02:52:26.811427 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 02:52:26.811434 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 02:52:26.811441 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 02:52:26.811450 kernel: ACPI: Interpreter enabled
May 27 02:52:26.811457 kernel: ACPI: Using GIC for interrupt routing
May 27 02:52:26.811464 kernel: ACPI: MCFG table detected, 1 entries
May 27 02:52:26.811471 kernel: ACPI: CPU0 has been hot-added
May 27 02:52:26.811478 kernel: ACPI: CPU1 has been hot-added
May 27 02:52:26.811485 kernel: ACPI: CPU2 has been hot-added
May 27 02:52:26.811491 kernel: ACPI: CPU3 has been hot-added
May 27 02:52:26.811498 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 27 02:52:26.811505 kernel: printk: legacy console [ttyAMA0] enabled
May 27 02:52:26.811514 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 02:52:26.811654 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 02:52:26.811722 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 27 02:52:26.811784 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 27 02:52:26.811858 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 27 02:52:26.811924 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 27 02:52:26.811934 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 27 02:52:26.811945 kernel: PCI host bridge to bus 0000:00
May 27 02:52:26.812014 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 27 02:52:26.812074 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 27 02:52:26.812130 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 27 02:52:26.812190 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 02:52:26.812273 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 27 02:52:26.812409 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 02:52:26.812481 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 27 02:52:26.812543 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 27 02:52:26.812605 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 27 02:52:26.812664 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 27 02:52:26.812725 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 27 02:52:26.812787 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 27 02:52:26.812854 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 27 02:52:26.812910 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 27 02:52:26.812975 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 27 02:52:26.812987 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 27 02:52:26.812995 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 27 02:52:26.813002 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 27 02:52:26.813009 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 27 02:52:26.813016 kernel: iommu: Default domain type: Translated
May 27 02:52:26.813025 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 27 02:52:26.813032 kernel: efivars: Registered efivars operations
May 27 02:52:26.813040 kernel: vgaarb: loaded
May 27 02:52:26.813047 kernel: clocksource: Switched to clocksource arch_sys_counter
May 27 02:52:26.813054 kernel: VFS: Disk quotas dquot_6.6.0
May 27 02:52:26.813061 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 02:52:26.813068 kernel: pnp: PnP ACPI init
May 27 02:52:26.813143 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 27 02:52:26.813153 kernel: pnp: PnP ACPI: found 1 devices
May 27 02:52:26.813162 kernel: NET: Registered PF_INET protocol family
May 27 02:52:26.813169 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 02:52:26.813176 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 02:52:26.813183 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 02:52:26.813190 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 02:52:26.813197 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 02:52:26.813204 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 02:52:26.813211 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 02:52:26.813218 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 02:52:26.813226 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 02:52:26.813233 kernel: PCI: CLS 0 bytes, default 64
May 27 02:52:26.813240 kernel: kvm [1]: HYP mode not available
May 27 02:52:26.813247 kernel: Initialise system trusted keyrings
May 27 02:52:26.813254 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 02:52:26.813261 kernel: Key type asymmetric registered
May 27 02:52:26.813268 kernel: Asymmetric key parser 'x509' registered
May 27 02:52:26.813275 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 27 02:52:26.813282 kernel: io scheduler mq-deadline registered
May 27 02:52:26.813290 kernel: io scheduler kyber registered
May 27 02:52:26.813297 kernel: io scheduler bfq registered
May 27 02:52:26.813304 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 27 02:52:26.813320 kernel: ACPI: button: Power Button [PWRB]
May 27 02:52:26.813327 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 27 02:52:26.813397 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 27 02:52:26.813407 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 02:52:26.813414 kernel: thunder_xcv, ver 1.0
May 27 02:52:26.813421 kernel: thunder_bgx, ver 1.0
May 27 02:52:26.813430 kernel: nicpf, ver 1.0
May 27 02:52:26.813437 kernel: nicvf, ver 1.0
May 27 02:52:26.813506 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 27 02:52:26.813563 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-27T02:52:26 UTC (1748314346)
May 27 02:52:26.813573 kernel: hid: raw HID events driver (C) Jiri Kosina
May 27 02:52:26.813580 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 27 02:52:26.813587 kernel: watchdog: NMI not fully supported
May 27 02:52:26.813594 kernel: watchdog: Hard watchdog permanently disabled
May 27 02:52:26.813603 kernel: NET: Registered PF_INET6 protocol family
May 27 02:52:26.813610 kernel: Segment Routing with IPv6
May 27 02:52:26.813617 kernel: In-situ OAM (IOAM) with IPv6
May 27 02:52:26.813623 kernel: NET: Registered PF_PACKET protocol family
May 27 02:52:26.813631 kernel: Key type dns_resolver registered
May 27 02:52:26.813638 kernel: registered taskstats version 1
May 27 02:52:26.813645 kernel: Loading compiled-in X.509 certificates
May 27 02:52:26.813652 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 6bbf5412ef1f8a32378a640b6d048f74e6d74df0'
May 27 02:52:26.813659 kernel: Demotion targets for Node 0: null
May 27 02:52:26.813667 kernel: Key type .fscrypt registered
May 27 02:52:26.813674 kernel: Key type fscrypt-provisioning registered
May 27 02:52:26.813695 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 02:52:26.813702 kernel: ima: Allocated hash algorithm: sha1
May 27 02:52:26.813709 kernel: ima: No architecture policies found
May 27 02:52:26.813717 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 27 02:52:26.813724 kernel: clk: Disabling unused clocks
May 27 02:52:26.813731 kernel: PM: genpd: Disabling unused power domains
May 27 02:52:26.813738 kernel: Warning: unable to open an initial console.
May 27 02:52:26.813746 kernel: Freeing unused kernel memory: 39424K
May 27 02:52:26.813753 kernel: Run /init as init process
May 27 02:52:26.813760 kernel: with arguments:
May 27 02:52:26.813767 kernel: /init
May 27 02:52:26.813773 kernel: with environment:
May 27 02:52:26.813780 kernel: HOME=/
May 27 02:52:26.813787 kernel: TERM=linux
May 27 02:52:26.813793 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 02:52:26.813801 systemd[1]: Successfully made /usr/ read-only.
May 27 02:52:26.813813 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 02:52:26.813821 systemd[1]: Detected virtualization kvm.
May 27 02:52:26.813834 systemd[1]: Detected architecture arm64.
May 27 02:52:26.813843 systemd[1]: Running in initrd.
May 27 02:52:26.813850 systemd[1]: No hostname configured, using default hostname.
May 27 02:52:26.813857 systemd[1]: Hostname set to .
May 27 02:52:26.813864 systemd[1]: Initializing machine ID from VM UUID.
May 27 02:52:26.813874 systemd[1]: Queued start job for default target initrd.target.
May 27 02:52:26.813881 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 02:52:26.813889 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 02:52:26.813896 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 02:52:26.813904 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 02:52:26.813912 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 02:52:26.813920 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 02:52:26.813929 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 02:52:26.813937 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 02:52:26.813945 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 02:52:26.813952 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 02:52:26.813960 systemd[1]: Reached target paths.target - Path Units.
May 27 02:52:26.813967 systemd[1]: Reached target slices.target - Slice Units.
May 27 02:52:26.813974 systemd[1]: Reached target swap.target - Swaps.
May 27 02:52:26.813982 systemd[1]: Reached target timers.target - Timer Units.
May 27 02:52:26.813991 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 02:52:26.813999 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 02:52:26.814006 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 02:52:26.814013 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 02:52:26.814021 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 02:52:26.814028 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 02:52:26.814036 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 02:52:26.814043 systemd[1]: Reached target sockets.target - Socket Units.
May 27 02:52:26.814052 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 02:52:26.814060 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 02:52:26.814068 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 02:52:26.814076 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 02:52:26.814083 systemd[1]: Starting systemd-fsck-usr.service...
May 27 02:52:26.814091 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 02:52:26.814098 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 02:52:26.814105 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 02:52:26.814112 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 02:52:26.814122 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 02:52:26.814129 systemd[1]: Finished systemd-fsck-usr.service.
May 27 02:52:26.814137 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 02:52:26.814167 systemd-journald[243]: Collecting audit messages is disabled.
May 27 02:52:26.814188 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 02:52:26.814196 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 02:52:26.814204 systemd-journald[243]: Journal started
May 27 02:52:26.814224 systemd-journald[243]: Runtime Journal (/run/log/journal/66251be7b2c4462eb6e29c6979158aaf) is 6M, max 48.5M, 42.4M free.
May 27 02:52:26.808411 systemd-modules-load[244]: Inserted module 'overlay'
May 27 02:52:26.819343 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 02:52:26.826407 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 02:52:26.829578 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 02:52:26.830414 systemd-modules-load[244]: Inserted module 'br_netfilter'
May 27 02:52:26.831026 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 02:52:26.832954 kernel: Bridge firewalling registered
May 27 02:52:26.832701 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 02:52:26.841476 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 02:52:26.845423 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 02:52:26.846623 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 02:52:26.847136 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 02:52:26.847984 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 02:52:26.850821 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 02:52:26.853032 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 02:52:26.856241 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 02:52:26.858702 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 02:52:26.876579 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4c3f98aae7a61b3dcbab6391ba922461adab29dbcb79fd6e18169f93c5a4ab5a
May 27 02:52:26.889333 systemd-resolved[287]: Positive Trust Anchors:
May 27 02:52:26.889350 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 02:52:26.889381 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 02:52:26.894188 systemd-resolved[287]: Defaulting to hostname 'linux'.
May 27 02:52:26.895383 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 02:52:26.897757 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 02:52:26.950345 kernel: SCSI subsystem initialized
May 27 02:52:26.954351 kernel: Loading iSCSI transport class v2.0-870.
May 27 02:52:26.962390 kernel: iscsi: registered transport (tcp)
May 27 02:52:26.974505 kernel: iscsi: registered transport (qla4xxx)
May 27 02:52:26.974542 kernel: QLogic iSCSI HBA Driver
May 27 02:52:26.990592 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 02:52:27.015373 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 02:52:27.017926 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 02:52:27.064341 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 02:52:27.066419 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 02:52:27.124333 kernel: raid6: neonx8 gen() 15732 MB/s May 27 02:52:27.141325 kernel: raid6: neonx4 gen() 15789 MB/s May 27 02:52:27.158322 kernel: raid6: neonx2 gen() 13187 MB/s May 27 02:52:27.175325 kernel: raid6: neonx1 gen() 10536 MB/s May 27 02:52:27.192334 kernel: raid6: int64x8 gen() 6881 MB/s May 27 02:52:27.209324 kernel: raid6: int64x4 gen() 7311 MB/s May 27 02:52:27.226324 kernel: raid6: int64x2 gen() 6080 MB/s May 27 02:52:27.243334 kernel: raid6: int64x1 gen() 5033 MB/s May 27 02:52:27.243359 kernel: raid6: using algorithm neonx4 gen() 15789 MB/s May 27 02:52:27.260336 kernel: raid6: .... xor() 12355 MB/s, rmw enabled May 27 02:52:27.260357 kernel: raid6: using neon recovery algorithm May 27 02:52:27.265329 kernel: xor: measuring software checksum speed May 27 02:52:27.265357 kernel: 8regs : 20892 MB/sec May 27 02:52:27.266706 kernel: 32regs : 19797 MB/sec May 27 02:52:27.266718 kernel: arm64_neon : 26356 MB/sec May 27 02:52:27.266728 kernel: xor: using function: arm64_neon (26356 MB/sec) May 27 02:52:27.319336 kernel: Btrfs loaded, zoned=no, fsverity=no May 27 02:52:27.325719 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 27 02:52:27.328180 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 02:52:27.351559 systemd-udevd[498]: Using default interface naming scheme 'v255'. May 27 02:52:27.355557 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 02:52:27.357801 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 27 02:52:27.380366 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation May 27 02:52:27.400779 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 27 02:52:27.402933 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 02:52:27.450009 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
May 27 02:52:27.452515 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 27 02:52:27.496914 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 27 02:52:27.497092 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 27 02:52:27.504571 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 27 02:52:27.504620 kernel: GPT:9289727 != 19775487 May 27 02:52:27.504631 kernel: GPT:Alternate GPT header not at the end of the disk. May 27 02:52:27.505690 kernel: GPT:9289727 != 19775487 May 27 02:52:27.505711 kernel: GPT: Use GNU Parted to correct GPT errors. May 27 02:52:27.506444 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 02:52:27.506658 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 02:52:27.506777 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 02:52:27.509128 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 27 02:52:27.511429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 02:52:27.536503 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 27 02:52:27.537896 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 27 02:52:27.544419 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 27 02:52:27.556429 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 27 02:52:27.563495 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 02:52:27.569230 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 27 02:52:27.570123 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. 
May 27 02:52:27.572093 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 27 02:52:27.574607 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 02:52:27.576303 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 02:52:27.578636 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 27 02:52:27.580236 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 27 02:52:27.608782 disk-uuid[593]: Primary Header is updated. May 27 02:52:27.608782 disk-uuid[593]: Secondary Entries is updated. May 27 02:52:27.608782 disk-uuid[593]: Secondary Header is updated. May 27 02:52:27.611532 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 27 02:52:27.613269 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 02:52:28.624334 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 27 02:52:28.625275 disk-uuid[597]: The operation has completed successfully. May 27 02:52:28.647990 systemd[1]: disk-uuid.service: Deactivated successfully. May 27 02:52:28.648106 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 27 02:52:28.676029 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 27 02:52:28.704006 sh[613]: Success May 27 02:52:28.722819 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 27 02:52:28.722863 kernel: device-mapper: uevent: version 1.0.3 May 27 02:52:28.724404 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev May 27 02:52:28.735338 kernel: device-mapper: verity: sha256 using shash "sha256-ce" May 27 02:52:28.767947 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 27 02:52:28.770617 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
May 27 02:52:28.786628 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 27 02:52:28.793164 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' May 27 02:52:28.793198 kernel: BTRFS: device fsid 5c6341ea-4eb5-44b6-ac57-c4d29847e384 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (625) May 27 02:52:28.794951 kernel: BTRFS info (device dm-0): first mount of filesystem 5c6341ea-4eb5-44b6-ac57-c4d29847e384 May 27 02:52:28.794984 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 27 02:52:28.794994 kernel: BTRFS info (device dm-0): using free-space-tree May 27 02:52:28.798486 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 27 02:52:28.799752 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. May 27 02:52:28.801223 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 27 02:52:28.802046 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 27 02:52:28.803573 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 27 02:52:28.832326 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (656) May 27 02:52:28.834675 kernel: BTRFS info (device vda6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 02:52:28.834709 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 27 02:52:28.834727 kernel: BTRFS info (device vda6): using free-space-tree May 27 02:52:28.841325 kernel: BTRFS info (device vda6): last unmount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 02:52:28.842616 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 27 02:52:28.844852 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
May 27 02:52:28.911107 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 02:52:28.914454 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 02:52:28.965920 systemd-networkd[797]: lo: Link UP May 27 02:52:28.965931 systemd-networkd[797]: lo: Gained carrier May 27 02:52:28.966694 systemd-networkd[797]: Enumeration completed May 27 02:52:28.966807 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 02:52:28.967593 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 02:52:28.967598 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 02:52:28.968324 systemd[1]: Reached target network.target - Network. May 27 02:52:28.968978 systemd-networkd[797]: eth0: Link UP May 27 02:52:28.968983 systemd-networkd[797]: eth0: Gained carrier May 27 02:52:28.968994 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 27 02:52:28.985356 systemd-networkd[797]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 02:52:28.992425 ignition[702]: Ignition 2.21.0 May 27 02:52:28.992439 ignition[702]: Stage: fetch-offline May 27 02:52:28.992475 ignition[702]: no configs at "/usr/lib/ignition/base.d" May 27 02:52:28.992485 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 02:52:28.992676 ignition[702]: parsed url from cmdline: "" May 27 02:52:28.992681 ignition[702]: no config URL provided May 27 02:52:28.992686 ignition[702]: reading system config file "/usr/lib/ignition/user.ign" May 27 02:52:28.992692 ignition[702]: no config at "/usr/lib/ignition/user.ign" May 27 02:52:28.992710 ignition[702]: op(1): [started] loading QEMU firmware config module May 27 02:52:28.992714 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg" May 27 02:52:28.999316 ignition[702]: op(1): [finished] loading QEMU firmware config module May 27 02:52:29.038388 ignition[702]: parsing config with SHA512: 6c21d9cd22db3be6e225cdb770693f3e8ae2de6a46eb9f8b5a7f6d98ccf46987f8b723f2b658776fe2af38bbe210b5442556085bddba5ffe0401e9083fa2e671 May 27 02:52:29.044604 unknown[702]: fetched base config from "system" May 27 02:52:29.044617 unknown[702]: fetched user config from "qemu" May 27 02:52:29.045040 ignition[702]: fetch-offline: fetch-offline passed May 27 02:52:29.045094 ignition[702]: Ignition finished successfully May 27 02:52:29.050386 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 27 02:52:29.051656 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 27 02:52:29.052437 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 27 02:52:29.082923 ignition[812]: Ignition 2.21.0 May 27 02:52:29.082940 ignition[812]: Stage: kargs May 27 02:52:29.083070 ignition[812]: no configs at "/usr/lib/ignition/base.d" May 27 02:52:29.083079 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 02:52:29.084417 ignition[812]: kargs: kargs passed May 27 02:52:29.084486 ignition[812]: Ignition finished successfully May 27 02:52:29.086454 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 27 02:52:29.089184 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 27 02:52:29.120464 ignition[820]: Ignition 2.21.0 May 27 02:52:29.120480 ignition[820]: Stage: disks May 27 02:52:29.121465 ignition[820]: no configs at "/usr/lib/ignition/base.d" May 27 02:52:29.121487 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 02:52:29.122655 ignition[820]: disks: disks passed May 27 02:52:29.122712 ignition[820]: Ignition finished successfully May 27 02:52:29.124275 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 27 02:52:29.126148 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 27 02:52:29.127715 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 27 02:52:29.129695 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 02:52:29.131575 systemd[1]: Reached target sysinit.target - System Initialization. May 27 02:52:29.133290 systemd[1]: Reached target basic.target - Basic System. May 27 02:52:29.135829 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 27 02:52:29.168605 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks May 27 02:52:29.172618 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 27 02:52:29.174680 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 27 02:52:29.240271 systemd[1]: Mounted sysroot.mount - /sysroot. May 27 02:52:29.241742 kernel: EXT4-fs (vda9): mounted filesystem 5656cec4-efbd-4a2d-be98-2263e6ae16bd r/w with ordered data mode. Quota mode: none. May 27 02:52:29.241473 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 27 02:52:29.244409 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 27 02:52:29.246495 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 27 02:52:29.247428 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 27 02:52:29.247467 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 27 02:52:29.247489 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 27 02:52:29.261753 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 27 02:52:29.264561 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 27 02:52:29.268870 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (838) May 27 02:52:29.268899 kernel: BTRFS info (device vda6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 02:52:29.268909 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 27 02:52:29.270328 kernel: BTRFS info (device vda6): using free-space-tree May 27 02:52:29.273303 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 27 02:52:29.307187 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory May 27 02:52:29.311344 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory May 27 02:52:29.315368 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory May 27 02:52:29.318203 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory May 27 02:52:29.394691 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 27 02:52:29.397118 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 27 02:52:29.398777 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 27 02:52:29.418329 kernel: BTRFS info (device vda6): last unmount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 02:52:29.432436 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 27 02:52:29.445149 ignition[951]: INFO : Ignition 2.21.0 May 27 02:52:29.445149 ignition[951]: INFO : Stage: mount May 27 02:52:29.447133 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 02:52:29.447133 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 02:52:29.449770 ignition[951]: INFO : mount: mount passed May 27 02:52:29.449770 ignition[951]: INFO : Ignition finished successfully May 27 02:52:29.450519 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 27 02:52:29.452587 systemd[1]: Starting ignition-files.service - Ignition (files)... May 27 02:52:29.799245 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 27 02:52:29.800776 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 27 02:52:29.821325 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (964) May 27 02:52:29.821368 kernel: BTRFS info (device vda6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9 May 27 02:52:29.822784 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 27 02:52:29.822809 kernel: BTRFS info (device vda6): using free-space-tree May 27 02:52:29.825622 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 27 02:52:29.857568 ignition[981]: INFO : Ignition 2.21.0 May 27 02:52:29.857568 ignition[981]: INFO : Stage: files May 27 02:52:29.859181 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 02:52:29.859181 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 02:52:29.859181 ignition[981]: DEBUG : files: compiled without relabeling support, skipping May 27 02:52:29.862301 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 27 02:52:29.862301 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 27 02:52:29.865056 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 27 02:52:29.865056 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 27 02:52:29.865056 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 27 02:52:29.864250 unknown[981]: wrote ssh authorized keys file for user: core May 27 02:52:29.869962 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 27 02:52:29.869962 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 May 27 02:52:29.917846 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
result: OK May 27 02:52:30.059776 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" May 27 02:52:30.059776 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 27 02:52:30.063795 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 27 02:52:30.063795 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 27 02:52:30.063795 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 27 02:52:30.063795 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 02:52:30.063795 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 27 02:52:30.063795 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 02:52:30.063795 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 27 02:52:30.063795 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 27 02:52:30.063795 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 27 02:52:30.063795 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 27 02:52:30.080346 ignition[981]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 27 02:52:30.080346 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 27 02:52:30.080346 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 May 27 02:52:30.611193 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 27 02:52:30.926547 systemd-networkd[797]: eth0: Gained IPv6LL May 27 02:52:31.114531 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" May 27 02:52:31.114531 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 27 02:52:31.118886 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 02:52:31.118886 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 27 02:52:31.118886 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 27 02:52:31.118886 ignition[981]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" May 27 02:52:31.118886 ignition[981]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 02:52:31.118886 ignition[981]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 27 02:52:31.118886 ignition[981]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" May 27 02:52:31.118886 ignition[981]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" May 27 02:52:31.132492 ignition[981]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" May 27 02:52:31.135301 ignition[981]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 27 02:52:31.137454 ignition[981]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" May 27 02:52:31.137454 ignition[981]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" May 27 02:52:31.137454 ignition[981]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" May 27 02:52:31.137454 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" May 27 02:52:31.137454 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" May 27 02:52:31.137454 ignition[981]: INFO : files: files passed May 27 02:52:31.137454 ignition[981]: INFO : Ignition finished successfully May 27 02:52:31.139115 systemd[1]: Finished ignition-files.service - Ignition (files). May 27 02:52:31.142590 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 27 02:52:31.144920 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 27 02:52:31.158357 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory May 27 02:52:31.158709 systemd[1]: ignition-quench.service: Deactivated successfully. May 27 02:52:31.158784 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
May 27 02:52:31.164286 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 02:52:31.164286 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 27 02:52:31.168878 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 27 02:52:31.164881 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 02:52:31.167568 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 27 02:52:31.171470 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 27 02:52:31.205805 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 27 02:52:31.205909 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 27 02:52:31.208055 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 27 02:52:31.209859 systemd[1]: Reached target initrd.target - Initrd Default Target. May 27 02:52:31.211571 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 27 02:52:31.212303 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 27 02:52:31.242453 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 02:52:31.245933 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 27 02:52:31.263558 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 27 02:52:31.264465 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 02:52:31.266180 systemd[1]: Stopped target timers.target - Timer Units. May 27 02:52:31.267727 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
May 27 02:52:31.267848 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 27 02:52:31.270110 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 27 02:52:31.271798 systemd[1]: Stopped target basic.target - Basic System. May 27 02:52:31.273157 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 27 02:52:31.274565 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 27 02:52:31.276139 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 27 02:52:31.277795 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. May 27 02:52:31.279414 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 27 02:52:31.280995 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 27 02:52:31.282607 systemd[1]: Stopped target sysinit.target - System Initialization. May 27 02:52:31.284242 systemd[1]: Stopped target local-fs.target - Local File Systems. May 27 02:52:31.285709 systemd[1]: Stopped target swap.target - Swaps. May 27 02:52:31.286988 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 27 02:52:31.287110 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 27 02:52:31.289079 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 27 02:52:31.289939 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 02:52:31.291529 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 27 02:52:31.292365 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 02:52:31.293249 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 27 02:52:31.293397 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 27 02:52:31.295700 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
May 27 02:52:31.295810 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 27 02:52:31.297846 systemd[1]: Stopped target paths.target - Path Units. May 27 02:52:31.299094 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 27 02:52:31.299201 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 02:52:31.300767 systemd[1]: Stopped target slices.target - Slice Units. May 27 02:52:31.302163 systemd[1]: Stopped target sockets.target - Socket Units. May 27 02:52:31.303728 systemd[1]: iscsid.socket: Deactivated successfully. May 27 02:52:31.303811 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 27 02:52:31.305085 systemd[1]: iscsiuio.socket: Deactivated successfully. May 27 02:52:31.305160 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 27 02:52:31.306682 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 27 02:52:31.306796 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 27 02:52:31.308695 systemd[1]: ignition-files.service: Deactivated successfully. May 27 02:52:31.308797 systemd[1]: Stopped ignition-files.service - Ignition (files). May 27 02:52:31.310854 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 27 02:52:31.312148 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 02:52:31.312273 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 02:52:31.322631 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 27 02:52:31.323441 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 27 02:52:31.323579 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 27 02:52:31.325345 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
May 27 02:52:31.325454 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 27 02:52:31.332863 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 27 02:52:31.333898 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 27 02:52:31.336158 ignition[1038]: INFO : Ignition 2.21.0 May 27 02:52:31.336158 ignition[1038]: INFO : Stage: umount May 27 02:52:31.336158 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" May 27 02:52:31.336158 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 27 02:52:31.341414 ignition[1038]: INFO : umount: umount passed May 27 02:52:31.341414 ignition[1038]: INFO : Ignition finished successfully May 27 02:52:31.337161 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 27 02:52:31.339199 systemd[1]: ignition-mount.service: Deactivated successfully. May 27 02:52:31.339301 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 27 02:52:31.342264 systemd[1]: Stopped target network.target - Network. May 27 02:52:31.343418 systemd[1]: ignition-disks.service: Deactivated successfully. May 27 02:52:31.343478 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 27 02:52:31.344859 systemd[1]: ignition-kargs.service: Deactivated successfully. May 27 02:52:31.344901 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 27 02:52:31.346235 systemd[1]: ignition-setup.service: Deactivated successfully. May 27 02:52:31.346274 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 27 02:52:31.347707 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 27 02:52:31.347747 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 27 02:52:31.349272 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 27 02:52:31.350500 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
May 27 02:52:31.357042 systemd[1]: systemd-resolved.service: Deactivated successfully. May 27 02:52:31.357157 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 27 02:52:31.359952 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 27 02:52:31.360183 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 27 02:52:31.360220 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 02:52:31.363423 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 27 02:52:31.363655 systemd[1]: systemd-networkd.service: Deactivated successfully. May 27 02:52:31.363771 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 27 02:52:31.368076 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 27 02:52:31.368440 systemd[1]: Stopped target network-pre.target - Preparation for Network. May 27 02:52:31.369902 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 27 02:52:31.369941 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 27 02:52:31.372253 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 27 02:52:31.372940 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 27 02:52:31.372993 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 27 02:52:31.374838 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 27 02:52:31.374889 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 27 02:52:31.378200 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 02:52:31.378245 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 02:52:31.380052 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
May 27 02:52:31.383571 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 02:52:31.392768 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 02:52:31.393927 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 02:52:31.396394 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 02:52:31.397410 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 02:52:31.401004 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 02:52:31.401102 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 02:52:31.404620 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 02:52:31.404732 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 02:52:31.406732 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 02:52:31.406781 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 02:52:31.408137 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 02:52:31.408166 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 02:52:31.410275 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 02:52:31.410331 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 02:52:31.413115 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 02:52:31.413160 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 02:52:31.415841 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 02:52:31.415894 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 02:52:31.419334 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 02:52:31.420644 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 02:52:31.420704 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 02:52:31.423589 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 02:52:31.423628 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 02:52:31.426928 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 02:52:31.426968 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 02:52:31.436824 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 02:52:31.436919 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 02:52:31.439150 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 02:52:31.441600 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 02:52:31.469762 systemd[1]: Switching root.
May 27 02:52:31.506044 systemd-journald[243]: Journal stopped
May 27 02:52:32.224536 systemd-journald[243]: Received SIGTERM from PID 1 (systemd).
May 27 02:52:32.224581 kernel: SELinux: policy capability network_peer_controls=1
May 27 02:52:32.224595 kernel: SELinux: policy capability open_perms=1
May 27 02:52:32.224605 kernel: SELinux: policy capability extended_socket_class=1
May 27 02:52:32.224614 kernel: SELinux: policy capability always_check_network=0
May 27 02:52:32.224626 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 02:52:32.224635 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 02:52:32.224647 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 02:52:32.224657 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 02:52:32.224668 kernel: SELinux: policy capability userspace_initial_context=0
May 27 02:52:32.224677 kernel: audit: type=1403 audit(1748314351.662:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 02:52:32.224686 systemd[1]: Successfully loaded SELinux policy in 43.033ms.
May 27 02:52:32.224709 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.367ms.
May 27 02:52:32.224721 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 02:52:32.224731 systemd[1]: Detected virtualization kvm.
May 27 02:52:32.224742 systemd[1]: Detected architecture arm64.
May 27 02:52:32.224751 systemd[1]: Detected first boot.
May 27 02:52:32.224761 systemd[1]: Initializing machine ID from VM UUID.
May 27 02:52:32.224773 zram_generator::config[1084]: No configuration found.
May 27 02:52:32.224783 kernel: NET: Registered PF_VSOCK protocol family
May 27 02:52:32.224792 systemd[1]: Populated /etc with preset unit settings.
May 27 02:52:32.224804 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 02:52:32.224823 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 02:52:32.224835 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 02:52:32.224845 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 02:52:32.224855 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 02:52:32.224865 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 02:52:32.224875 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 02:52:32.224884 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 02:52:32.224894 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 02:52:32.224906 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 02:52:32.224916 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 02:52:32.224927 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 02:52:32.224937 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 02:52:32.224947 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 02:52:32.224958 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 02:52:32.224968 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 02:52:32.224979 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 02:52:32.224990 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 02:52:32.225000 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 27 02:52:32.225010 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 02:52:32.225020 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 02:52:32.225029 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 02:52:32.225040 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 02:52:32.225050 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 02:52:32.225059 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 02:52:32.225071 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 02:52:32.225081 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 02:52:32.225091 systemd[1]: Reached target slices.target - Slice Units.
May 27 02:52:32.225100 systemd[1]: Reached target swap.target - Swaps.
May 27 02:52:32.225110 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 02:52:32.225120 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 02:52:32.225130 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 02:52:32.225140 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 02:52:32.225150 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 02:52:32.225161 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 02:52:32.225171 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 02:52:32.225180 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 02:52:32.225190 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 02:52:32.225200 systemd[1]: Mounting media.mount - External Media Directory...
May 27 02:52:32.225209 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 02:52:32.225219 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 02:52:32.225229 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 02:52:32.225239 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 02:52:32.225251 systemd[1]: Reached target machines.target - Containers.
May 27 02:52:32.225261 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 02:52:32.225270 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 02:52:32.225281 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 02:52:32.225291 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 02:52:32.225301 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 02:52:32.225328 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 02:52:32.225338 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 02:52:32.225351 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 02:52:32.225360 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 02:52:32.225371 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 02:52:32.225381 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 02:52:32.225390 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 02:52:32.225400 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 02:52:32.225411 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 02:52:32.225421 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 02:52:32.225430 kernel: loop: module loaded
May 27 02:52:32.225441 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 02:52:32.225451 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 02:52:32.225460 kernel: fuse: init (API version 7.41)
May 27 02:52:32.225470 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 02:52:32.225479 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 02:52:32.225489 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 02:52:32.225499 kernel: ACPI: bus type drm_connector registered
May 27 02:52:32.225509 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 02:52:32.225519 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 02:52:32.225529 systemd[1]: Stopped verity-setup.service.
May 27 02:52:32.225539 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 02:52:32.225548 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 02:52:32.225558 systemd[1]: Mounted media.mount - External Media Directory.
May 27 02:52:32.225567 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 02:52:32.225578 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 02:52:32.225588 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 02:52:32.225598 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 02:52:32.225630 systemd-journald[1153]: Collecting audit messages is disabled.
May 27 02:52:32.225652 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 02:52:32.225664 systemd-journald[1153]: Journal started
May 27 02:52:32.225683 systemd-journald[1153]: Runtime Journal (/run/log/journal/66251be7b2c4462eb6e29c6979158aaf) is 6M, max 48.5M, 42.4M free.
May 27 02:52:32.022604 systemd[1]: Queued start job for default target multi-user.target.
May 27 02:52:32.046134 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 27 02:52:32.046514 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 02:52:32.228007 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 02:52:32.228040 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 02:52:32.229629 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 02:52:32.230344 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 02:52:32.232360 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 02:52:32.233381 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 02:52:32.233533 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 02:52:32.234558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 02:52:32.235356 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 02:52:32.236431 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 02:52:32.236593 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 02:52:32.237576 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 02:52:32.237723 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 02:52:32.238875 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 02:52:32.239957 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 02:52:32.242680 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 02:52:32.243842 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 02:52:32.255774 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 02:52:32.257865 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 02:52:32.259625 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 02:52:32.260437 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 02:52:32.260463 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 02:52:32.262057 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 02:52:32.277054 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 02:52:32.277931 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 02:52:32.278995 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 02:52:32.280790 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 02:52:32.281893 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 02:52:32.286470 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 02:52:32.287587 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 02:52:32.288517 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 02:52:32.292581 systemd-journald[1153]: Time spent on flushing to /var/log/journal/66251be7b2c4462eb6e29c6979158aaf is 25.142ms for 877 entries.
May 27 02:52:32.292581 systemd-journald[1153]: System Journal (/var/log/journal/66251be7b2c4462eb6e29c6979158aaf) is 8M, max 195.6M, 187.6M free.
May 27 02:52:32.326388 systemd-journald[1153]: Received client request to flush runtime journal.
May 27 02:52:32.326450 kernel: loop0: detected capacity change from 0 to 107312
May 27 02:52:32.326463 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 02:52:32.292271 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 02:52:32.301349 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 02:52:32.305407 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 02:52:32.306767 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 02:52:32.308646 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 02:52:32.314027 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 02:52:32.315687 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 02:52:32.322722 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 02:52:32.327987 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 02:52:32.334121 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 02:52:32.347427 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 02:52:32.351333 kernel: loop1: detected capacity change from 0 to 138376
May 27 02:52:32.365555 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 02:52:32.368107 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 02:52:32.376364 kernel: loop2: detected capacity change from 0 to 207008
May 27 02:52:32.399247 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
May 27 02:52:32.399265 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
May 27 02:52:32.403384 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 02:52:32.406432 kernel: loop3: detected capacity change from 0 to 107312
May 27 02:52:32.412324 kernel: loop4: detected capacity change from 0 to 138376
May 27 02:52:32.421323 kernel: loop5: detected capacity change from 0 to 207008
May 27 02:52:32.426604 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 27 02:52:32.427008 (sd-merge)[1222]: Merged extensions into '/usr'.
May 27 02:52:32.430501 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 02:52:32.430515 systemd[1]: Reloading...
May 27 02:52:32.482254 zram_generator::config[1247]: No configuration found.
May 27 02:52:32.562075 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 02:52:32.565585 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 02:52:32.626663 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 02:52:32.626918 systemd[1]: Reloading finished in 196 ms.
May 27 02:52:32.643669 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 02:52:32.646336 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 02:52:32.658420 systemd[1]: Starting ensure-sysext.service...
May 27 02:52:32.660116 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 02:52:32.670840 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)...
May 27 02:52:32.670855 systemd[1]: Reloading...
May 27 02:52:32.678703 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 02:52:32.678736 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 02:52:32.678961 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 02:52:32.679140 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 02:52:32.679735 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 02:52:32.679949 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
May 27 02:52:32.679997 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
May 27 02:52:32.682448 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
May 27 02:52:32.682460 systemd-tmpfiles[1284]: Skipping /boot
May 27 02:52:32.691587 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
May 27 02:52:32.691602 systemd-tmpfiles[1284]: Skipping /boot
May 27 02:52:32.719338 zram_generator::config[1311]: No configuration found.
May 27 02:52:32.790413 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 02:52:32.851677 systemd[1]: Reloading finished in 180 ms.
May 27 02:52:32.871661 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 02:52:32.882218 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 02:52:32.890563 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 02:52:32.892691 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 02:52:32.900765 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 02:52:32.905942 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 02:52:32.913494 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 02:52:32.919545 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 02:52:32.922864 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 02:52:32.930583 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 02:52:32.932739 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 02:52:32.935601 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 02:52:32.936408 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 02:52:32.936512 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 02:52:32.938536 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 02:52:32.942760 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 02:52:32.944162 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 02:52:32.944348 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 02:52:32.946051 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 02:52:32.946195 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 02:52:32.947875 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 02:52:32.948247 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 02:52:32.953381 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
May 27 02:52:32.958020 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 02:52:32.961521 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 02:52:32.963643 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 02:52:32.965921 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 02:52:32.967106 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 02:52:32.967218 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 02:52:32.968502 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 02:52:32.972497 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 02:52:32.974070 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 02:52:32.975651 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 02:52:32.975787 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 02:52:32.977803 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 02:52:32.981829 augenrules[1390]: No rules
May 27 02:52:32.982108 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 02:52:32.982276 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 02:52:32.984425 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 02:52:32.984611 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 02:52:32.986001 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 02:52:33.002049 systemd[1]: Finished ensure-sysext.service.
May 27 02:52:33.006292 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 02:52:33.008150 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 02:52:33.009443 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 02:52:33.016639 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 02:52:33.017415 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 02:52:33.019267 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 02:52:33.027564 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 02:52:33.031515 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 02:52:33.032397 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 02:52:33.032443 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 02:52:33.033831 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 02:52:33.034571 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 02:52:33.036030 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 27 02:52:33.038393 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 02:52:33.038900 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 02:52:33.039073 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 02:52:33.040130 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 02:52:33.040287 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 02:52:33.041245 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 02:52:33.041396 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 02:52:33.048434 augenrules[1430]: /sbin/augenrules: No change
May 27 02:52:33.050029 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 02:52:33.063749 augenrules[1456]: No rules
May 27 02:52:33.064919 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 02:52:33.065196 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 02:52:33.073239 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 27 02:52:33.126777 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 02:52:33.130541 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 02:52:33.160124 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 27 02:52:33.161204 systemd[1]: Reached target time-set.target - System Time Set.
May 27 02:52:33.164633 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 02:52:33.169301 systemd-networkd[1439]: lo: Link UP
May 27 02:52:33.169416 systemd-networkd[1439]: lo: Gained carrier
May 27 02:52:33.170507 systemd-networkd[1439]: Enumeration completed
May 27 02:52:33.171105 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 02:52:33.171112 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 02:52:33.171616 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 02:52:33.173749 systemd-networkd[1439]: eth0: Link UP
May 27 02:52:33.173938 systemd-networkd[1439]: eth0: Gained carrier
May 27 02:52:33.173952 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 02:52:33.176666 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 02:52:33.178590 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 02:52:33.181888 systemd-resolved[1350]: Positive Trust Anchors:
May 27 02:52:33.181903 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 02:52:33.181940 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 02:52:33.187376 systemd-networkd[1439]: eth0: DHCPv4 address 10.0.0.73/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 02:52:33.187764 systemd-resolved[1350]: Defaulting to hostname 'linux'.
May 27 02:52:33.187852 systemd-timesyncd[1440]: Network configuration changed, trying to establish connection.
May 27 02:52:33.189989 systemd-timesyncd[1440]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 27 02:52:33.190043 systemd-timesyncd[1440]: Initial clock synchronization to Tue 2025-05-27 02:52:32.802778 UTC.
May 27 02:52:33.193990 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 02:52:33.194975 systemd[1]: Reached target network.target - Network.
May 27 02:52:33.195929 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 02:52:33.197158 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 02:52:33.198291 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 02:52:33.199539 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 02:52:33.200886 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 02:52:33.202012 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 02:52:33.203230 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 02:52:33.204438 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 02:52:33.204472 systemd[1]: Reached target paths.target - Path Units.
May 27 02:52:33.205340 systemd[1]: Reached target timers.target - Timer Units.
May 27 02:52:33.207017 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 02:52:33.209457 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 02:52:33.212505 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 02:52:33.213849 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 02:52:33.215123 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 02:52:33.241027 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 02:52:33.242180 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 02:52:33.243825 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 02:52:33.244896 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 02:52:33.252589 systemd[1]: Reached target sockets.target - Socket Units.
May 27 02:52:33.253289 systemd[1]: Reached target basic.target - Basic System.
May 27 02:52:33.253978 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 02:52:33.254007 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 02:52:33.255009 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 02:52:33.256691 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 02:52:33.258451 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 02:52:33.263387 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 02:52:33.265162 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 02:52:33.266145 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 02:52:33.267062 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 02:52:33.270542 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 02:52:33.271659 jq[1497]: false
May 27 02:52:33.272392 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 02:52:33.275933 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 02:52:33.281444 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 02:52:33.282980 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 02:52:33.283169 extend-filesystems[1498]: Found loop3
May 27 02:52:33.284475 extend-filesystems[1498]: Found loop4
May 27 02:52:33.284475 extend-filesystems[1498]: Found loop5
May 27 02:52:33.284475 extend-filesystems[1498]: Found vda
May 27 02:52:33.284475 extend-filesystems[1498]: Found vda1
May 27 02:52:33.284475 extend-filesystems[1498]: Found vda2
May 27 02:52:33.284475 extend-filesystems[1498]: Found vda3
May 27 02:52:33.284475 extend-filesystems[1498]: Found usr
May 27 02:52:33.284475 extend-filesystems[1498]: Found vda4
May 27 02:52:33.284475 extend-filesystems[1498]: Found vda6
May 27 02:52:33.284475 extend-filesystems[1498]: Found vda7
May 27 02:52:33.284475 extend-filesystems[1498]: Found vda9
May 27 02:52:33.284475 extend-filesystems[1498]: Checking size of /dev/vda9
May 27 02:52:33.284605 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 02:52:33.284989 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 02:52:33.285517 systemd[1]: Starting update-engine.service - Update Engine...
May 27 02:52:33.294758 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 02:52:33.300215 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 02:52:33.302783 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 02:52:33.303010 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 02:52:33.303252 systemd[1]: motdgen.service: Deactivated successfully.
May 27 02:52:33.303432 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 02:52:33.306346 extend-filesystems[1498]: Resized partition /dev/vda9
May 27 02:52:33.309918 extend-filesystems[1523]: resize2fs 1.47.2 (1-Jan-2025)
May 27 02:52:33.313780 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 02:52:33.316370 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 02:52:33.316724 jq[1515]: true
May 27 02:52:33.318331 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 27 02:52:33.342097 (ntainerd)[1526]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 02:52:33.354324 jq[1525]: true
May 27 02:52:33.354437 update_engine[1514]: I20250527 02:52:33.353379 1514 main.cc:92] Flatcar Update Engine starting
May 27 02:52:33.364334 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 27 02:52:33.378211 extend-filesystems[1523]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 27 02:52:33.378211 extend-filesystems[1523]: old_desc_blocks = 1, new_desc_blocks = 1
May 27 02:52:33.378211 extend-filesystems[1523]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 27 02:52:33.389154 extend-filesystems[1498]: Resized filesystem in /dev/vda9
May 27 02:52:33.383934 dbus-daemon[1495]: [system] SELinux support is enabled
May 27 02:52:33.379665 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 02:52:33.390053 tar[1522]: linux-arm64/LICENSE
May 27 02:52:33.390053 tar[1522]: linux-arm64/helm
May 27 02:52:33.379936 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 02:52:33.391388 update_engine[1514]: I20250527 02:52:33.391069 1514 update_check_scheduler.cc:74] Next update check in 7m50s
May 27 02:52:33.411462 systemd-logind[1506]: Watching system buttons on /dev/input/event0 (Power Button)
May 27 02:52:33.411942 systemd-logind[1506]: New seat seat0.
May 27 02:52:33.419247 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 02:52:33.422258 bash[1555]: Updated "/home/core/.ssh/authorized_keys"
May 27 02:52:33.423946 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 02:52:33.426709 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 02:52:33.432694 dbus-daemon[1495]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 27 02:52:33.436738 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 02:52:33.437833 systemd[1]: Started update-engine.service - Update Engine.
May 27 02:52:33.439513 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 27 02:52:33.439674 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 02:52:33.439784 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 02:52:33.442528 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 02:52:33.442638 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 02:52:33.445520 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 02:52:33.508468 locksmithd[1560]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 02:52:33.537615 sshd_keygen[1516]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 02:52:33.558265 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 02:52:33.561794 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 02:52:33.574841 systemd[1]: issuegen.service: Deactivated successfully.
May 27 02:52:33.575049 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 02:52:33.576177 containerd[1526]: time="2025-05-27T02:52:33Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 02:52:33.577640 containerd[1526]: time="2025-05-27T02:52:33.577603320Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 02:52:33.585672 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 02:52:33.588410 containerd[1526]: time="2025-05-27T02:52:33.586357320Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.16µs"
May 27 02:52:33.588410 containerd[1526]: time="2025-05-27T02:52:33.586383040Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 02:52:33.588410 containerd[1526]: time="2025-05-27T02:52:33.586398720Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 02:52:33.588410 containerd[1526]: time="2025-05-27T02:52:33.586534560Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 02:52:33.588410 containerd[1526]: time="2025-05-27T02:52:33.586550000Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 02:52:33.588410 containerd[1526]: time="2025-05-27T02:52:33.586569840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 02:52:33.588410 containerd[1526]: time="2025-05-27T02:52:33.586614080Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 02:52:33.588410 containerd[1526]: time="2025-05-27T02:52:33.586623760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 02:52:33.588410 containerd[1526]: time="2025-05-27T02:52:33.587047560Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 02:52:33.588410 containerd[1526]: time="2025-05-27T02:52:33.587065000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 02:52:33.588410 containerd[1526]: time="2025-05-27T02:52:33.587075880Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 02:52:33.588410 containerd[1526]: time="2025-05-27T02:52:33.587083760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 02:52:33.588618 containerd[1526]: time="2025-05-27T02:52:33.587165880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 02:52:33.588618 containerd[1526]: time="2025-05-27T02:52:33.587379240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 02:52:33.588618 containerd[1526]: time="2025-05-27T02:52:33.587408960Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 02:52:33.588618 containerd[1526]: time="2025-05-27T02:52:33.587418840Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 02:52:33.588618 containerd[1526]: time="2025-05-27T02:52:33.587445320Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 02:52:33.588618 containerd[1526]: time="2025-05-27T02:52:33.587637440Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 02:52:33.588618 containerd[1526]: time="2025-05-27T02:52:33.587694960Z" level=info msg="metadata content store policy set" policy=shared
May 27 02:52:33.590681 containerd[1526]: time="2025-05-27T02:52:33.590648120Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 02:52:33.590802 containerd[1526]: time="2025-05-27T02:52:33.590700560Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 02:52:33.590802 containerd[1526]: time="2025-05-27T02:52:33.590715120Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 02:52:33.590802 containerd[1526]: time="2025-05-27T02:52:33.590729520Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 02:52:33.590802 containerd[1526]: time="2025-05-27T02:52:33.590740840Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 02:52:33.590802 containerd[1526]: time="2025-05-27T02:52:33.590752800Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 02:52:33.590802 containerd[1526]: time="2025-05-27T02:52:33.590763480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 02:52:33.590802 containerd[1526]: time="2025-05-27T02:52:33.590774200Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 02:52:33.590802 containerd[1526]: time="2025-05-27T02:52:33.590783800Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 02:52:33.590802 containerd[1526]: time="2025-05-27T02:52:33.590793960Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 02:52:33.590802 containerd[1526]: time="2025-05-27T02:52:33.590803760Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 02:52:33.590971 containerd[1526]: time="2025-05-27T02:52:33.590823960Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 02:52:33.590971 containerd[1526]: time="2025-05-27T02:52:33.590925320Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 02:52:33.590971 containerd[1526]: time="2025-05-27T02:52:33.590945000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 02:52:33.590971 containerd[1526]: time="2025-05-27T02:52:33.590963720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 02:52:33.591035 containerd[1526]: time="2025-05-27T02:52:33.590974120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 02:52:33.591035 containerd[1526]: time="2025-05-27T02:52:33.590984400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 02:52:33.591035 containerd[1526]: time="2025-05-27T02:52:33.590994760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 02:52:33.591035 containerd[1526]: time="2025-05-27T02:52:33.591004640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 02:52:33.591035 containerd[1526]: time="2025-05-27T02:52:33.591014040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 02:52:33.591035 containerd[1526]: time="2025-05-27T02:52:33.591023920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 02:52:33.591035 containerd[1526]: time="2025-05-27T02:52:33.591033880Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 02:52:33.591147 containerd[1526]: time="2025-05-27T02:52:33.591043640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 02:52:33.591248 containerd[1526]: time="2025-05-27T02:52:33.591219160Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 02:52:33.591248 containerd[1526]: time="2025-05-27T02:52:33.591240240Z" level=info msg="Start snapshots syncer"
May 27 02:52:33.591502 containerd[1526]: time="2025-05-27T02:52:33.591265720Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 02:52:33.591532 containerd[1526]: time="2025-05-27T02:52:33.591487880Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 02:52:33.591652 containerd[1526]: time="2025-05-27T02:52:33.591534320Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 02:52:33.591652 containerd[1526]: time="2025-05-27T02:52:33.591608880Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 02:52:33.591727 containerd[1526]: time="2025-05-27T02:52:33.591710440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 02:52:33.591762 containerd[1526]: time="2025-05-27T02:52:33.591735880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 02:52:33.591762 containerd[1526]: time="2025-05-27T02:52:33.591746720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 02:52:33.591762 containerd[1526]: time="2025-05-27T02:52:33.591758360Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 02:52:33.591817 containerd[1526]: time="2025-05-27T02:52:33.591770160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 02:52:33.591817 containerd[1526]: time="2025-05-27T02:52:33.591780520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 02:52:33.591817 containerd[1526]: time="2025-05-27T02:52:33.591790400Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 27 02:52:33.591866 containerd[1526]: time="2025-05-27T02:52:33.591821680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 02:52:33.591866 containerd[1526]: time="2025-05-27T02:52:33.591834680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 02:52:33.591866 containerd[1526]: time="2025-05-27T02:52:33.591845080Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 02:52:33.591935 containerd[1526]: time="2025-05-27T02:52:33.591888760Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 02:52:33.591935 containerd[1526]: time="2025-05-27T02:52:33.591902920Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 02:52:33.591935 containerd[1526]: time="2025-05-27T02:52:33.591911120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 02:52:33.591935 containerd[1526]: time="2025-05-27T02:52:33.591920200Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 02:52:33.591935 containerd[1526]: time="2025-05-27T02:52:33.591928400Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 27 02:52:33.591935 containerd[1526]: time="2025-05-27T02:52:33.591937760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 27 02:52:33.591935 containerd[1526]: time="2025-05-27T02:52:33.591948080Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 27 02:52:33.592437 containerd[1526]: time="2025-05-27T02:52:33.592026440Z" level=info msg="runtime interface created"
May 27 02:52:33.592437 containerd[1526]: time="2025-05-27T02:52:33.592031520Z" level=info msg="created NRI interface"
May 27 02:52:33.592437 containerd[1526]: time="2025-05-27T02:52:33.592043040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 27 02:52:33.592437 containerd[1526]: time="2025-05-27T02:52:33.592053680Z" level=info msg="Connect containerd service"
May 27 02:52:33.592437 containerd[1526]: time="2025-05-27T02:52:33.592078000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 27 02:52:33.592874 containerd[1526]: time="2025-05-27T02:52:33.592833520Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 02:52:33.603054 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 02:52:33.605514 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 02:52:33.607626 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 27 02:52:33.608572 systemd[1]: Reached target getty.target - Login Prompts.
May 27 02:52:33.706268 containerd[1526]: time="2025-05-27T02:52:33.706174360Z" level=info msg="Start subscribing containerd event"
May 27 02:52:33.706493 containerd[1526]: time="2025-05-27T02:52:33.706468680Z" level=info msg="Start recovering state"
May 27 02:52:33.706654 containerd[1526]: time="2025-05-27T02:52:33.706485840Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 02:52:33.706866 containerd[1526]: time="2025-05-27T02:52:33.706848920Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 02:52:33.707080 containerd[1526]: time="2025-05-27T02:52:33.706764240Z" level=info msg="Start event monitor"
May 27 02:52:33.707333 containerd[1526]: time="2025-05-27T02:52:33.707303600Z" level=info msg="Start cni network conf syncer for default"
May 27 02:52:33.707404 containerd[1526]: time="2025-05-27T02:52:33.707392360Z" level=info msg="Start streaming server"
May 27 02:52:33.707462 containerd[1526]: time="2025-05-27T02:52:33.707443160Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 02:52:33.707555 containerd[1526]: time="2025-05-27T02:52:33.707545000Z" level=info msg="runtime interface starting up..."
May 27 02:52:33.707680 containerd[1526]: time="2025-05-27T02:52:33.707667120Z" level=info msg="starting plugins..."
May 27 02:52:33.707902 containerd[1526]: time="2025-05-27T02:52:33.707724840Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 02:52:33.708097 containerd[1526]: time="2025-05-27T02:52:33.708082560Z" level=info msg="containerd successfully booted in 0.132220s"
May 27 02:52:33.708181 systemd[1]: Started containerd.service - containerd container runtime.
May 27 02:52:33.791733 tar[1522]: linux-arm64/README.md
May 27 02:52:33.809540 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 27 02:52:34.382457 systemd-networkd[1439]: eth0: Gained IPv6LL
May 27 02:52:34.384572 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 02:52:34.386102 systemd[1]: Reached target network-online.target - Network is Online.
May 27 02:52:34.389690 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 27 02:52:34.391562 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 02:52:34.402743 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 02:52:34.416613 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 27 02:52:34.416796 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 27 02:52:34.418229 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 02:52:34.420868 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 27 02:52:34.907863 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 02:52:34.909041 systemd[1]: Reached target multi-user.target - Multi-User System.
May 27 02:52:34.912002 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 02:52:34.913397 systemd[1]: Startup finished in 2.083s (kernel) + 5.034s (initrd) + 3.299s (userspace) = 10.418s.
May 27 02:52:35.290478 kubelet[1629]: E0527 02:52:35.290368 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 02:52:35.292839 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 02:52:35.292968 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 02:52:35.293261 systemd[1]: kubelet.service: Consumed 802ms CPU time, 256.6M memory peak.
May 27 02:52:39.690512 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 27 02:52:39.691546 systemd[1]: Started sshd@0-10.0.0.73:22-10.0.0.1:36528.service - OpenSSH per-connection server daemon (10.0.0.1:36528).
May 27 02:52:39.752129 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 36528 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg
May 27 02:52:39.753602 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:52:39.760598 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 02:52:39.761413 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 02:52:39.767364 systemd-logind[1506]: New session 1 of user core.
May 27 02:52:39.795347 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 02:52:39.797689 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 02:52:39.814256 (systemd)[1646]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 02:52:39.816244 systemd-logind[1506]: New session c1 of user core.
May 27 02:52:39.917297 systemd[1646]: Queued start job for default target default.target.
May 27 02:52:39.928125 systemd[1646]: Created slice app.slice - User Application Slice.
May 27 02:52:39.928151 systemd[1646]: Reached target paths.target - Paths.
May 27 02:52:39.928184 systemd[1646]: Reached target timers.target - Timers.
May 27 02:52:39.929273 systemd[1646]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 02:52:39.937414 systemd[1646]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 02:52:39.937472 systemd[1646]: Reached target sockets.target - Sockets.
May 27 02:52:39.937505 systemd[1646]: Reached target basic.target - Basic System.
May 27 02:52:39.937530 systemd[1646]: Reached target default.target - Main User Target.
May 27 02:52:39.937551 systemd[1646]: Startup finished in 116ms.
May 27 02:52:39.937801 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 02:52:39.939810 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 02:52:40.001084 systemd[1]: Started sshd@1-10.0.0.73:22-10.0.0.1:36536.service - OpenSSH per-connection server daemon (10.0.0.1:36536).
May 27 02:52:40.040236 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 36536 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg
May 27 02:52:40.041296 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:52:40.044688 systemd-logind[1506]: New session 2 of user core.
May 27 02:52:40.057462 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 02:52:40.106949 sshd[1659]: Connection closed by 10.0.0.1 port 36536
May 27 02:52:40.107214 sshd-session[1657]: pam_unix(sshd:session): session closed for user core
May 27 02:52:40.119119 systemd[1]: sshd@1-10.0.0.73:22-10.0.0.1:36536.service: Deactivated successfully.
May 27 02:52:40.121615 systemd[1]: session-2.scope: Deactivated successfully.
May 27 02:52:40.122351 systemd-logind[1506]: Session 2 logged out. Waiting for processes to exit.
May 27 02:52:40.124547 systemd[1]: Started sshd@2-10.0.0.73:22-10.0.0.1:36540.service - OpenSSH per-connection server daemon (10.0.0.1:36540).
May 27 02:52:40.125343 systemd-logind[1506]: Removed session 2.
May 27 02:52:40.184367 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 36540 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg
May 27 02:52:40.185597 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:52:40.189480 systemd-logind[1506]: New session 3 of user core.
May 27 02:52:40.200455 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 02:52:40.247003 sshd[1667]: Connection closed by 10.0.0.1 port 36540
May 27 02:52:40.247298 sshd-session[1665]: pam_unix(sshd:session): session closed for user core
May 27 02:52:40.258901 systemd[1]: sshd@2-10.0.0.73:22-10.0.0.1:36540.service: Deactivated successfully.
May 27 02:52:40.262141 systemd[1]: session-3.scope: Deactivated successfully.
May 27 02:52:40.262907 systemd-logind[1506]: Session 3 logged out. Waiting for processes to exit.
May 27 02:52:40.265979 systemd[1]: Started sshd@3-10.0.0.73:22-10.0.0.1:36556.service - OpenSSH per-connection server daemon (10.0.0.1:36556).
May 27 02:52:40.266400 systemd-logind[1506]: Removed session 3.
May 27 02:52:40.316544 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 36556 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg
May 27 02:52:40.317728 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:52:40.322205 systemd-logind[1506]: New session 4 of user core.
May 27 02:52:40.330458 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 02:52:40.382112 sshd[1675]: Connection closed by 10.0.0.1 port 36556
May 27 02:52:40.381972 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
May 27 02:52:40.392338 systemd[1]: sshd@3-10.0.0.73:22-10.0.0.1:36556.service: Deactivated successfully.
May 27 02:52:40.394791 systemd[1]: session-4.scope: Deactivated successfully.
May 27 02:52:40.395514 systemd-logind[1506]: Session 4 logged out. Waiting for processes to exit.
May 27 02:52:40.398444 systemd[1]: Started sshd@4-10.0.0.73:22-10.0.0.1:36570.service - OpenSSH per-connection server daemon (10.0.0.1:36570).
May 27 02:52:40.398967 systemd-logind[1506]: Removed session 4.
May 27 02:52:40.451640 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 36570 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg
May 27 02:52:40.452765 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:52:40.457061 systemd-logind[1506]: New session 5 of user core.
May 27 02:52:40.465476 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 02:52:40.524251 sudo[1684]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 02:52:40.524560 sudo[1684]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 02:52:40.535949 sudo[1684]: pam_unix(sudo:session): session closed for user root
May 27 02:52:40.539059 sshd[1683]: Connection closed by 10.0.0.1 port 36570
May 27 02:52:40.539623 sshd-session[1681]: pam_unix(sshd:session): session closed for user core
May 27 02:52:40.547675 systemd[1]: sshd@4-10.0.0.73:22-10.0.0.1:36570.service: Deactivated successfully.
May 27 02:52:40.549436 systemd[1]: session-5.scope: Deactivated successfully.
May 27 02:52:40.550194 systemd-logind[1506]: Session 5 logged out. Waiting for processes to exit.
May 27 02:52:40.553707 systemd[1]: Started sshd@5-10.0.0.73:22-10.0.0.1:36584.service - OpenSSH per-connection server daemon (10.0.0.1:36584).
May 27 02:52:40.554411 systemd-logind[1506]: Removed session 5.
May 27 02:52:40.609245 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 36584 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:52:40.611826 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:52:40.618569 systemd-logind[1506]: New session 6 of user core. May 27 02:52:40.626478 systemd[1]: Started session-6.scope - Session 6 of User core. May 27 02:52:40.676650 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 27 02:52:40.677244 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 02:52:40.752521 sudo[1694]: pam_unix(sudo:session): session closed for user root May 27 02:52:40.757707 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 27 02:52:40.757969 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 02:52:40.771386 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 02:52:40.813603 augenrules[1716]: No rules May 27 02:52:40.814751 systemd[1]: audit-rules.service: Deactivated successfully. May 27 02:52:40.814986 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 02:52:40.816506 sudo[1693]: pam_unix(sudo:session): session closed for user root May 27 02:52:40.817627 sshd[1692]: Connection closed by 10.0.0.1 port 36584 May 27 02:52:40.817940 sshd-session[1690]: pam_unix(sshd:session): session closed for user core May 27 02:52:40.826763 systemd[1]: sshd@5-10.0.0.73:22-10.0.0.1:36584.service: Deactivated successfully. May 27 02:52:40.829652 systemd[1]: session-6.scope: Deactivated successfully. May 27 02:52:40.830357 systemd-logind[1506]: Session 6 logged out. Waiting for processes to exit. 
May 27 02:52:40.838491 systemd[1]: Started sshd@6-10.0.0.73:22-10.0.0.1:36588.service - OpenSSH per-connection server daemon (10.0.0.1:36588). May 27 02:52:40.839120 systemd-logind[1506]: Removed session 6. May 27 02:52:40.896277 sshd[1725]: Accepted publickey for core from 10.0.0.1 port 36588 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:52:40.897858 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:52:40.902365 systemd-logind[1506]: New session 7 of user core. May 27 02:52:40.911458 systemd[1]: Started session-7.scope - Session 7 of User core. May 27 02:52:40.960920 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 27 02:52:40.961502 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 27 02:52:41.352194 systemd[1]: Starting docker.service - Docker Application Container Engine... May 27 02:52:41.377738 (dockerd)[1748]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 27 02:52:41.637884 dockerd[1748]: time="2025-05-27T02:52:41.637737899Z" level=info msg="Starting up" May 27 02:52:41.639067 dockerd[1748]: time="2025-05-27T02:52:41.639011036Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 27 02:52:41.664677 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1358185722-merged.mount: Deactivated successfully. May 27 02:52:41.677476 systemd[1]: var-lib-docker-metacopy\x2dcheck4027637047-merged.mount: Deactivated successfully. May 27 02:52:41.688682 dockerd[1748]: time="2025-05-27T02:52:41.688493100Z" level=info msg="Loading containers: start." 
May 27 02:52:41.699323 kernel: Initializing XFRM netlink socket May 27 02:52:41.922469 systemd-networkd[1439]: docker0: Link UP May 27 02:52:41.926010 dockerd[1748]: time="2025-05-27T02:52:41.925899413Z" level=info msg="Loading containers: done." May 27 02:52:41.940031 dockerd[1748]: time="2025-05-27T02:52:41.939980876Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 27 02:52:41.940187 dockerd[1748]: time="2025-05-27T02:52:41.940055432Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 27 02:52:41.940187 dockerd[1748]: time="2025-05-27T02:52:41.940145840Z" level=info msg="Initializing buildkit" May 27 02:52:41.958965 dockerd[1748]: time="2025-05-27T02:52:41.958919646Z" level=info msg="Completed buildkit initialization" May 27 02:52:41.965254 dockerd[1748]: time="2025-05-27T02:52:41.965208895Z" level=info msg="Daemon has completed initialization" May 27 02:52:41.965408 dockerd[1748]: time="2025-05-27T02:52:41.965299970Z" level=info msg="API listen on /run/docker.sock" May 27 02:52:41.965465 systemd[1]: Started docker.service - Docker Application Container Engine. May 27 02:52:42.614942 containerd[1526]: time="2025-05-27T02:52:42.614900130Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\"" May 27 02:52:43.222589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2217826470.mount: Deactivated successfully. 
May 27 02:52:44.367191 containerd[1526]: time="2025-05-27T02:52:44.367118052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:44.367707 containerd[1526]: time="2025-05-27T02:52:44.367645767Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.5: active requests=0, bytes read=26326313" May 27 02:52:44.368843 containerd[1526]: time="2025-05-27T02:52:44.368817024Z" level=info msg="ImageCreate event name:\"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:44.371036 containerd[1526]: time="2025-05-27T02:52:44.371001140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:44.372404 containerd[1526]: time="2025-05-27T02:52:44.372362111Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.5\" with image id \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0bee1bf751fe06009678c0cde7545443ba3a8d2edf71cea4c69cbb5774b9bf47\", size \"26323111\" in 1.757420884s" May 27 02:52:44.372474 containerd[1526]: time="2025-05-27T02:52:44.372406064Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.5\" returns image reference \"sha256:42968274c3d27c41cdc146f5442f122c1c74960e299c13e2f348d2fe835a9134\"" May 27 02:52:44.373035 containerd[1526]: time="2025-05-27T02:52:44.373012366Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\"" May 27 02:52:45.543353 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
May 27 02:52:45.544746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:52:45.684952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:52:45.688406 (kubelet)[2018]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 02:52:45.724194 kubelet[2018]: E0527 02:52:45.724092 2018 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 02:52:45.727074 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 02:52:45.727209 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 02:52:45.728428 systemd[1]: kubelet.service: Consumed 143ms CPU time, 107.7M memory peak. 
May 27 02:52:46.126708 containerd[1526]: time="2025-05-27T02:52:46.126656452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:46.128374 containerd[1526]: time="2025-05-27T02:52:46.128336278Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.5: active requests=0, bytes read=22530549" May 27 02:52:46.129473 containerd[1526]: time="2025-05-27T02:52:46.129417674Z" level=info msg="ImageCreate event name:\"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:46.131566 containerd[1526]: time="2025-05-27T02:52:46.131537608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:46.132611 containerd[1526]: time="2025-05-27T02:52:46.132575472Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.5\" with image id \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:79bcf2f5e614c336c02dcea9dfcdf485d7297aed6a21239a99c87f7164f9baca\", size \"24066313\" in 1.759533188s" May 27 02:52:46.132611 containerd[1526]: time="2025-05-27T02:52:46.132608665Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.5\" returns image reference \"sha256:82042044d6ea1f1e5afda9c7351883800adbde447314786c4e5a2fd9e42aab09\"" May 27 02:52:46.133017 containerd[1526]: time="2025-05-27T02:52:46.132998469Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\"" May 27 02:52:47.426809 containerd[1526]: time="2025-05-27T02:52:47.426723655Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:47.427696 containerd[1526]: time="2025-05-27T02:52:47.427658097Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.5: active requests=0, bytes read=17484192" May 27 02:52:47.428533 containerd[1526]: time="2025-05-27T02:52:47.428509573Z" level=info msg="ImageCreate event name:\"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:47.431372 containerd[1526]: time="2025-05-27T02:52:47.431338915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:47.432395 containerd[1526]: time="2025-05-27T02:52:47.432356085Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.5\" with image id \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f0f39d8b9808c407cacb3a46a5a9ce4d4a4a7cf3b674ba4bd221f5bc90051d2a\", size \"19019974\" in 1.29932968s" May 27 02:52:47.432395 containerd[1526]: time="2025-05-27T02:52:47.432391699Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.5\" returns image reference \"sha256:e149336437f90109dad736c8a42e4b73c137a66579be8f3b9a456bcc62af3f9b\"" May 27 02:52:47.433396 containerd[1526]: time="2025-05-27T02:52:47.433374763Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 27 02:52:48.457014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3589472057.mount: Deactivated successfully. 
May 27 02:52:48.670172 containerd[1526]: time="2025-05-27T02:52:48.670113715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:48.670580 containerd[1526]: time="2025-05-27T02:52:48.670544516Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.5: active requests=0, bytes read=27377377" May 27 02:52:48.671259 containerd[1526]: time="2025-05-27T02:52:48.671219719Z" level=info msg="ImageCreate event name:\"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:48.673386 containerd[1526]: time="2025-05-27T02:52:48.673358201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:48.676564 containerd[1526]: time="2025-05-27T02:52:48.676525983Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.5\" with image id \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\", repo tag \"registry.k8s.io/kube-proxy:v1.32.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b\", size \"27376394\" in 1.243119562s" May 27 02:52:48.676609 containerd[1526]: time="2025-05-27T02:52:48.676561675Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\"" May 27 02:52:48.678261 containerd[1526]: time="2025-05-27T02:52:48.678237833Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 27 02:52:49.302479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2023896576.mount: Deactivated successfully. 
May 27 02:52:50.117451 containerd[1526]: time="2025-05-27T02:52:50.117390971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:50.118109 containerd[1526]: time="2025-05-27T02:52:50.118056825Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" May 27 02:52:50.118708 containerd[1526]: time="2025-05-27T02:52:50.118688793Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:50.121593 containerd[1526]: time="2025-05-27T02:52:50.121557747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:50.123173 containerd[1526]: time="2025-05-27T02:52:50.123140373Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.444868855s" May 27 02:52:50.123211 containerd[1526]: time="2025-05-27T02:52:50.123176963Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 27 02:52:50.123684 containerd[1526]: time="2025-05-27T02:52:50.123635329Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 27 02:52:50.591351 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3294176031.mount: Deactivated successfully. 
May 27 02:52:50.595674 containerd[1526]: time="2025-05-27T02:52:50.595633323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 02:52:50.596373 containerd[1526]: time="2025-05-27T02:52:50.596270620Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 27 02:52:50.596966 containerd[1526]: time="2025-05-27T02:52:50.596939736Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 02:52:50.598782 containerd[1526]: time="2025-05-27T02:52:50.598743292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 27 02:52:50.599787 containerd[1526]: time="2025-05-27T02:52:50.599442434Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 475.777834ms" May 27 02:52:50.599828 containerd[1526]: time="2025-05-27T02:52:50.599793378Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 27 02:52:50.600437 containerd[1526]: time="2025-05-27T02:52:50.600412222Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" May 27 02:52:51.123084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount434396428.mount: Deactivated 
successfully. May 27 02:52:53.497935 containerd[1526]: time="2025-05-27T02:52:53.497868106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:53.499584 containerd[1526]: time="2025-05-27T02:52:53.499543987Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" May 27 02:52:53.500298 containerd[1526]: time="2025-05-27T02:52:53.500274380Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:53.503504 containerd[1526]: time="2025-05-27T02:52:53.503458904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:52:53.505165 containerd[1526]: time="2025-05-27T02:52:53.505129127Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.904686229s" May 27 02:52:53.505202 containerd[1526]: time="2025-05-27T02:52:53.505162319Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" May 27 02:52:55.730023 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 27 02:52:55.731920 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:52:55.918426 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 02:52:55.931738 (kubelet)[2184]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 02:52:55.967264 kubelet[2184]: E0527 02:52:55.967213 2184 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 02:52:55.969829 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 02:52:55.969959 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 02:52:55.972381 systemd[1]: kubelet.service: Consumed 134ms CPU time, 107.4M memory peak. May 27 02:52:58.230820 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:52:58.230956 systemd[1]: kubelet.service: Consumed 134ms CPU time, 107.4M memory peak. May 27 02:52:58.232802 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:52:58.251773 systemd[1]: Reload requested from client PID 2200 ('systemctl') (unit session-7.scope)... May 27 02:52:58.251794 systemd[1]: Reloading... May 27 02:52:58.318391 zram_generator::config[2246]: No configuration found. May 27 02:52:58.442636 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 02:52:58.526296 systemd[1]: Reloading finished in 274 ms. May 27 02:52:58.572320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:52:58.574761 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:52:58.575659 systemd[1]: kubelet.service: Deactivated successfully. 
May 27 02:52:58.575873 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:52:58.575908 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95.2M memory peak. May 27 02:52:58.577179 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:52:58.683881 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:52:58.694575 (kubelet)[2290]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 02:52:58.728693 kubelet[2290]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 02:52:58.728693 kubelet[2290]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 02:52:58.728693 kubelet[2290]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 02:52:58.729009 kubelet[2290]: I0527 02:52:58.728746 2290 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 02:52:59.543006 kubelet[2290]: I0527 02:52:59.542956 2290 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 02:52:59.543006 kubelet[2290]: I0527 02:52:59.542988 2290 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 02:52:59.543269 kubelet[2290]: I0527 02:52:59.543245 2290 server.go:954] "Client rotation is on, will bootstrap in background" May 27 02:52:59.610408 kubelet[2290]: I0527 02:52:59.610357 2290 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 02:52:59.615432 kubelet[2290]: E0527 02:52:59.615399 2290 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.73:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 27 02:52:59.620865 kubelet[2290]: I0527 02:52:59.620819 2290 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 02:52:59.623542 kubelet[2290]: I0527 02:52:59.623524 2290 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 02:52:59.624172 kubelet[2290]: I0527 02:52:59.624138 2290 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 02:52:59.624347 kubelet[2290]: I0527 02:52:59.624175 2290 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 02:52:59.624438 kubelet[2290]: I0527 02:52:59.624420 2290 topology_manager.go:138] "Creating topology manager with none policy" 
May 27 02:52:59.624438 kubelet[2290]: I0527 02:52:59.624429 2290 container_manager_linux.go:304] "Creating device plugin manager" May 27 02:52:59.624634 kubelet[2290]: I0527 02:52:59.624617 2290 state_mem.go:36] "Initialized new in-memory state store" May 27 02:52:59.626921 kubelet[2290]: I0527 02:52:59.626888 2290 kubelet.go:446] "Attempting to sync node with API server" May 27 02:52:59.626956 kubelet[2290]: I0527 02:52:59.626926 2290 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 02:52:59.626956 kubelet[2290]: I0527 02:52:59.626953 2290 kubelet.go:352] "Adding apiserver pod source" May 27 02:52:59.626998 kubelet[2290]: I0527 02:52:59.626962 2290 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 02:52:59.631889 kubelet[2290]: W0527 02:52:59.631835 2290 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 27 02:52:59.631930 kubelet[2290]: E0527 02:52:59.631897 2290 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.73:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 27 02:52:59.633701 kubelet[2290]: W0527 02:52:59.633667 2290 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 27 02:52:59.633775 kubelet[2290]: E0527 02:52:59.633709 2290 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.73:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 27 02:52:59.634191 kubelet[2290]: I0527 02:52:59.634169 2290 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 02:52:59.634819 kubelet[2290]: I0527 02:52:59.634785 2290 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 02:52:59.635064 kubelet[2290]: W0527 02:52:59.635032 2290 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 27 02:52:59.636074 kubelet[2290]: I0527 02:52:59.635911 2290 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 02:52:59.636108 kubelet[2290]: I0527 02:52:59.636095 2290 server.go:1287] "Started kubelet" May 27 02:52:59.637651 kubelet[2290]: I0527 02:52:59.636163 2290 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 02:52:59.637651 kubelet[2290]: I0527 02:52:59.636787 2290 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 02:52:59.637651 kubelet[2290]: I0527 02:52:59.637039 2290 server.go:479] "Adding debug handlers to kubelet server" May 27 02:52:59.637651 kubelet[2290]: I0527 02:52:59.637058 2290 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 02:52:59.638537 kubelet[2290]: I0527 02:52:59.637829 2290 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 02:52:59.638537 kubelet[2290]: I0527 02:52:59.637909 2290 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 02:52:59.639777 kubelet[2290]: E0527 02:52:59.639738 2290 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" May 27 02:52:59.639777 kubelet[2290]: I0527 02:52:59.639767 2290 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 02:52:59.639925 kubelet[2290]: I0527 02:52:59.639904 2290 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 02:52:59.640221 kubelet[2290]: I0527 02:52:59.639955 2290 reconciler.go:26] "Reconciler: start to sync state" May 27 02:52:59.640221 kubelet[2290]: W0527 02:52:59.640191 2290 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 27 02:52:59.640292 kubelet[2290]: E0527 02:52:59.640224 2290 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.73:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 27 02:52:59.644077 kubelet[2290]: E0527 02:52:59.644045 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="200ms" May 27 02:52:59.644544 kubelet[2290]: E0527 02:52:59.644502 2290 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 02:52:59.644915 kubelet[2290]: E0527 02:52:59.644485 2290 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.73:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.73:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184342aa51ea5695 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 02:52:59.636070037 +0000 UTC m=+0.938546566,LastTimestamp:2025-05-27 02:52:59.636070037 +0000 UTC m=+0.938546566,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 02:52:59.645068 kubelet[2290]: I0527 02:52:59.645004 2290 factory.go:221] Registration of the containerd container factory successfully May 27 02:52:59.645068 kubelet[2290]: I0527 02:52:59.645052 2290 factory.go:221] Registration of the systemd container factory successfully May 27 02:52:59.645142 kubelet[2290]: I0527 02:52:59.645121 2290 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 02:52:59.654601 kubelet[2290]: I0527 02:52:59.654567 2290 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 02:52:59.655887 kubelet[2290]: I0527 02:52:59.655668 2290 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 02:52:59.655887 kubelet[2290]: I0527 02:52:59.655689 2290 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 02:52:59.655887 kubelet[2290]: I0527 02:52:59.655705 2290 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 02:52:59.655887 kubelet[2290]: I0527 02:52:59.655713 2290 kubelet.go:2382] "Starting kubelet main sync loop" May 27 02:52:59.655887 kubelet[2290]: E0527 02:52:59.655763 2290 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 02:52:59.656827 kubelet[2290]: W0527 02:52:59.656797 2290 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 27 02:52:59.657012 kubelet[2290]: E0527 02:52:59.656835 2290 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 27 02:52:59.657811 kubelet[2290]: I0527 02:52:59.657788 2290 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 02:52:59.657811 kubelet[2290]: I0527 02:52:59.657803 2290 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 02:52:59.657811 kubelet[2290]: I0527 02:52:59.657817 2290 state_mem.go:36] "Initialized new in-memory state store" May 27 02:52:59.740469 kubelet[2290]: E0527 02:52:59.740419 2290 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 02:52:59.756601 kubelet[2290]: E0527 02:52:59.756570 2290 kubelet.go:2406] 
"Skipping pod synchronization" err="container runtime status check may not have completed yet" May 27 02:52:59.836087 kubelet[2290]: I0527 02:52:59.836011 2290 policy_none.go:49] "None policy: Start" May 27 02:52:59.836087 kubelet[2290]: I0527 02:52:59.836034 2290 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 02:52:59.836087 kubelet[2290]: I0527 02:52:59.836046 2290 state_mem.go:35] "Initializing new in-memory state store" May 27 02:52:59.840639 kubelet[2290]: E0527 02:52:59.840613 2290 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 02:52:59.843009 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 02:52:59.844686 kubelet[2290]: E0527 02:52:59.844657 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="400ms" May 27 02:52:59.853774 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 02:52:59.857277 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 27 02:52:59.866248 kubelet[2290]: I0527 02:52:59.866200 2290 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 02:52:59.866451 kubelet[2290]: I0527 02:52:59.866431 2290 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 02:52:59.866484 kubelet[2290]: I0527 02:52:59.866448 2290 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 02:52:59.866675 kubelet[2290]: I0527 02:52:59.866648 2290 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 02:52:59.867799 kubelet[2290]: E0527 02:52:59.867773 2290 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 02:52:59.867858 kubelet[2290]: E0527 02:52:59.867807 2290 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 27 02:52:59.967709 kubelet[2290]: I0527 02:52:59.967671 2290 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 02:52:59.967986 systemd[1]: Created slice kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice - libcontainer container kubepods-burstable-pod447e79232307504a6964f3be51e3d64d.slice. 
May 27 02:52:59.968094 kubelet[2290]: E0527 02:52:59.968063 2290 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" May 27 02:52:59.987177 kubelet[2290]: E0527 02:52:59.987011 2290 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:52:59.989686 systemd[1]: Created slice kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice - libcontainer container kubepods-burstable-pod7c751acbcd1525da2f1a64e395f86bdd.slice. May 27 02:53:00.009267 kubelet[2290]: E0527 02:53:00.009246 2290 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:53:00.011580 systemd[1]: Created slice kubepods-burstable-pod36426eb75cee82d07816096475a8c81f.slice - libcontainer container kubepods-burstable-pod36426eb75cee82d07816096475a8c81f.slice. 
May 27 02:53:00.013160 kubelet[2290]: E0527 02:53:00.013141 2290 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:53:00.042574 kubelet[2290]: I0527 02:53:00.042538 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:53:00.042574 kubelet[2290]: I0527 02:53:00.042575 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:53:00.042652 kubelet[2290]: I0527 02:53:00.042594 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:53:00.042652 kubelet[2290]: I0527 02:53:00.042612 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36426eb75cee82d07816096475a8c81f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"36426eb75cee82d07816096475a8c81f\") " pod="kube-system/kube-apiserver-localhost" May 27 02:53:00.042652 kubelet[2290]: I0527 02:53:00.042629 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36426eb75cee82d07816096475a8c81f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"36426eb75cee82d07816096475a8c81f\") " pod="kube-system/kube-apiserver-localhost" May 27 02:53:00.042652 kubelet[2290]: I0527 02:53:00.042645 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:53:00.042736 kubelet[2290]: I0527 02:53:00.042659 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:53:00.042736 kubelet[2290]: I0527 02:53:00.042673 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 27 02:53:00.042736 kubelet[2290]: I0527 02:53:00.042687 2290 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/36426eb75cee82d07816096475a8c81f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"36426eb75cee82d07816096475a8c81f\") " pod="kube-system/kube-apiserver-localhost" May 27 02:53:00.169512 kubelet[2290]: I0527 02:53:00.169431 2290 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 02:53:00.169810 kubelet[2290]: E0527 
02:53:00.169777 2290 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" May 27 02:53:00.245564 kubelet[2290]: E0527 02:53:00.245522 2290 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.73:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.73:6443: connect: connection refused" interval="800ms" May 27 02:53:00.288166 containerd[1526]: time="2025-05-27T02:53:00.288121134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,}" May 27 02:53:00.305533 containerd[1526]: time="2025-05-27T02:53:00.305499294Z" level=info msg="connecting to shim 0499b1d3dddd265e937654942a771f515a8b2ee4b21a451401cca887a481d347" address="unix:///run/containerd/s/4e287bc1adb06fb1c16b9a961f2352e9c1825f7e9cd79660904443a3b52479dc" namespace=k8s.io protocol=ttrpc version=3 May 27 02:53:00.310767 containerd[1526]: time="2025-05-27T02:53:00.310705258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,}" May 27 02:53:00.314940 containerd[1526]: time="2025-05-27T02:53:00.314912484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:36426eb75cee82d07816096475a8c81f,Namespace:kube-system,Attempt:0,}" May 27 02:53:00.330504 systemd[1]: Started cri-containerd-0499b1d3dddd265e937654942a771f515a8b2ee4b21a451401cca887a481d347.scope - libcontainer container 0499b1d3dddd265e937654942a771f515a8b2ee4b21a451401cca887a481d347. 
May 27 02:53:00.332330 containerd[1526]: time="2025-05-27T02:53:00.331464328Z" level=info msg="connecting to shim 36e9d09892f5b1dc9ee2deee6d63989d87d2cd15edf3f6e61bb44f491e35a1ee" address="unix:///run/containerd/s/4104a5d714eaa971d519051d1b5577d6d5ca922cc20e9e170861ecaef931d551" namespace=k8s.io protocol=ttrpc version=3 May 27 02:53:00.346982 containerd[1526]: time="2025-05-27T02:53:00.346940351Z" level=info msg="connecting to shim 8b615473c714cc974f7420591314573231abdf736227cace2aed03ff026fe201" address="unix:///run/containerd/s/db980068f0f79c50d91aa70a678237412ab8a1140f31df76c8b0c913a1b8ea23" namespace=k8s.io protocol=ttrpc version=3 May 27 02:53:00.358598 systemd[1]: Started cri-containerd-36e9d09892f5b1dc9ee2deee6d63989d87d2cd15edf3f6e61bb44f491e35a1ee.scope - libcontainer container 36e9d09892f5b1dc9ee2deee6d63989d87d2cd15edf3f6e61bb44f491e35a1ee. May 27 02:53:00.374734 containerd[1526]: time="2025-05-27T02:53:00.374673244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:447e79232307504a6964f3be51e3d64d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0499b1d3dddd265e937654942a771f515a8b2ee4b21a451401cca887a481d347\"" May 27 02:53:00.377363 containerd[1526]: time="2025-05-27T02:53:00.377203036Z" level=info msg="CreateContainer within sandbox \"0499b1d3dddd265e937654942a771f515a8b2ee4b21a451401cca887a481d347\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 02:53:00.377436 systemd[1]: Started cri-containerd-8b615473c714cc974f7420591314573231abdf736227cace2aed03ff026fe201.scope - libcontainer container 8b615473c714cc974f7420591314573231abdf736227cace2aed03ff026fe201. 
May 27 02:53:00.383462 containerd[1526]: time="2025-05-27T02:53:00.383437051Z" level=info msg="Container 960a21303a9dc2fbbe700d50cd76bf941d18f90aa6d3cd62dc1b7981fd66749a: CDI devices from CRI Config.CDIDevices: []" May 27 02:53:00.389696 containerd[1526]: time="2025-05-27T02:53:00.389669430Z" level=info msg="CreateContainer within sandbox \"0499b1d3dddd265e937654942a771f515a8b2ee4b21a451401cca887a481d347\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"960a21303a9dc2fbbe700d50cd76bf941d18f90aa6d3cd62dc1b7981fd66749a\"" May 27 02:53:00.390420 containerd[1526]: time="2025-05-27T02:53:00.390390145Z" level=info msg="StartContainer for \"960a21303a9dc2fbbe700d50cd76bf941d18f90aa6d3cd62dc1b7981fd66749a\"" May 27 02:53:00.393107 containerd[1526]: time="2025-05-27T02:53:00.392885109Z" level=info msg="connecting to shim 960a21303a9dc2fbbe700d50cd76bf941d18f90aa6d3cd62dc1b7981fd66749a" address="unix:///run/containerd/s/4e287bc1adb06fb1c16b9a961f2352e9c1825f7e9cd79660904443a3b52479dc" protocol=ttrpc version=3 May 27 02:53:00.404229 containerd[1526]: time="2025-05-27T02:53:00.404195922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:7c751acbcd1525da2f1a64e395f86bdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"36e9d09892f5b1dc9ee2deee6d63989d87d2cd15edf3f6e61bb44f491e35a1ee\"" May 27 02:53:00.409183 containerd[1526]: time="2025-05-27T02:53:00.409146790Z" level=info msg="CreateContainer within sandbox \"36e9d09892f5b1dc9ee2deee6d63989d87d2cd15edf3f6e61bb44f491e35a1ee\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 02:53:00.413463 systemd[1]: Started cri-containerd-960a21303a9dc2fbbe700d50cd76bf941d18f90aa6d3cd62dc1b7981fd66749a.scope - libcontainer container 960a21303a9dc2fbbe700d50cd76bf941d18f90aa6d3cd62dc1b7981fd66749a. 
May 27 02:53:00.416563 containerd[1526]: time="2025-05-27T02:53:00.416533510Z" level=info msg="Container 4d057aa02b508af17a301eb057bdbd742d465c555d52d2f8b73e16c434f47908: CDI devices from CRI Config.CDIDevices: []" May 27 02:53:00.419608 containerd[1526]: time="2025-05-27T02:53:00.419504318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:36426eb75cee82d07816096475a8c81f,Namespace:kube-system,Attempt:0,} returns sandbox id \"8b615473c714cc974f7420591314573231abdf736227cace2aed03ff026fe201\"" May 27 02:53:00.422855 containerd[1526]: time="2025-05-27T02:53:00.422727746Z" level=info msg="CreateContainer within sandbox \"8b615473c714cc974f7420591314573231abdf736227cace2aed03ff026fe201\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 02:53:00.427587 containerd[1526]: time="2025-05-27T02:53:00.427557635Z" level=info msg="CreateContainer within sandbox \"36e9d09892f5b1dc9ee2deee6d63989d87d2cd15edf3f6e61bb44f491e35a1ee\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4d057aa02b508af17a301eb057bdbd742d465c555d52d2f8b73e16c434f47908\"" May 27 02:53:00.428011 containerd[1526]: time="2025-05-27T02:53:00.427988866Z" level=info msg="StartContainer for \"4d057aa02b508af17a301eb057bdbd742d465c555d52d2f8b73e16c434f47908\"" May 27 02:53:00.429028 containerd[1526]: time="2025-05-27T02:53:00.428998866Z" level=info msg="connecting to shim 4d057aa02b508af17a301eb057bdbd742d465c555d52d2f8b73e16c434f47908" address="unix:///run/containerd/s/4104a5d714eaa971d519051d1b5577d6d5ca922cc20e9e170861ecaef931d551" protocol=ttrpc version=3 May 27 02:53:00.431885 containerd[1526]: time="2025-05-27T02:53:00.431847497Z" level=info msg="Container 6e79b4088b6cbe92f17dcc40f549d4885b109b822294218e4a9eff9ca17e66cc: CDI devices from CRI Config.CDIDevices: []" May 27 02:53:00.437139 containerd[1526]: time="2025-05-27T02:53:00.437107300Z" level=info msg="CreateContainer within sandbox 
\"8b615473c714cc974f7420591314573231abdf736227cace2aed03ff026fe201\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6e79b4088b6cbe92f17dcc40f549d4885b109b822294218e4a9eff9ca17e66cc\"" May 27 02:53:00.437664 containerd[1526]: time="2025-05-27T02:53:00.437621366Z" level=info msg="StartContainer for \"6e79b4088b6cbe92f17dcc40f549d4885b109b822294218e4a9eff9ca17e66cc\"" May 27 02:53:00.441119 containerd[1526]: time="2025-05-27T02:53:00.441087748Z" level=info msg="connecting to shim 6e79b4088b6cbe92f17dcc40f549d4885b109b822294218e4a9eff9ca17e66cc" address="unix:///run/containerd/s/db980068f0f79c50d91aa70a678237412ab8a1140f31df76c8b0c913a1b8ea23" protocol=ttrpc version=3 May 27 02:53:00.449840 systemd[1]: Started cri-containerd-4d057aa02b508af17a301eb057bdbd742d465c555d52d2f8b73e16c434f47908.scope - libcontainer container 4d057aa02b508af17a301eb057bdbd742d465c555d52d2f8b73e16c434f47908. May 27 02:53:00.456355 systemd[1]: Started cri-containerd-6e79b4088b6cbe92f17dcc40f549d4885b109b822294218e4a9eff9ca17e66cc.scope - libcontainer container 6e79b4088b6cbe92f17dcc40f549d4885b109b822294218e4a9eff9ca17e66cc. 
May 27 02:53:00.458195 containerd[1526]: time="2025-05-27T02:53:00.458165001Z" level=info msg="StartContainer for \"960a21303a9dc2fbbe700d50cd76bf941d18f90aa6d3cd62dc1b7981fd66749a\" returns successfully" May 27 02:53:00.503138 containerd[1526]: time="2025-05-27T02:53:00.500652922Z" level=info msg="StartContainer for \"4d057aa02b508af17a301eb057bdbd742d465c555d52d2f8b73e16c434f47908\" returns successfully" May 27 02:53:00.517700 containerd[1526]: time="2025-05-27T02:53:00.517666630Z" level=info msg="StartContainer for \"6e79b4088b6cbe92f17dcc40f549d4885b109b822294218e4a9eff9ca17e66cc\" returns successfully" May 27 02:53:00.527507 kubelet[2290]: W0527 02:53:00.524564 2290 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.73:6443: connect: connection refused May 27 02:53:00.527507 kubelet[2290]: E0527 02:53:00.524604 2290 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.73:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.73:6443: connect: connection refused" logger="UnhandledError" May 27 02:53:00.573581 kubelet[2290]: I0527 02:53:00.570658 2290 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 02:53:00.573581 kubelet[2290]: E0527 02:53:00.570953 2290 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.73:6443/api/v1/nodes\": dial tcp 10.0.0.73:6443: connect: connection refused" node="localhost" May 27 02:53:00.663131 kubelet[2290]: E0527 02:53:00.663093 2290 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:53:00.666631 kubelet[2290]: E0527 02:53:00.666603 2290 
kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:53:00.669408 kubelet[2290]: E0527 02:53:00.669278 2290 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:53:01.372721 kubelet[2290]: I0527 02:53:01.372694 2290 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 02:53:01.670359 kubelet[2290]: E0527 02:53:01.670260 2290 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:53:01.670446 kubelet[2290]: E0527 02:53:01.670363 2290 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 02:53:02.043116 kubelet[2290]: E0527 02:53:02.041785 2290 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 27 02:53:02.130276 kubelet[2290]: I0527 02:53:02.130214 2290 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 02:53:02.140707 kubelet[2290]: I0527 02:53:02.140678 2290 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 02:53:02.199036 kubelet[2290]: E0527 02:53:02.198990 2290 kubelet.go:3196] "Failed creating a mirror pod" err="namespaces \"kube-system\" not found" pod="kube-system/kube-scheduler-localhost" May 27 02:53:02.199036 kubelet[2290]: I0527 02:53:02.199027 2290 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 02:53:02.255167 kubelet[2290]: E0527 02:53:02.254809 2290 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name 
system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 27 02:53:02.255441 kubelet[2290]: I0527 02:53:02.255299 2290 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 02:53:02.257726 kubelet[2290]: E0527 02:53:02.257697 2290 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 02:53:02.632991 kubelet[2290]: I0527 02:53:02.632744 2290 apiserver.go:52] "Watching apiserver" May 27 02:53:02.640537 kubelet[2290]: I0527 02:53:02.640501 2290 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 02:53:02.713862 kubelet[2290]: I0527 02:53:02.713720 2290 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 02:53:02.717055 kubelet[2290]: E0527 02:53:02.717031 2290 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 02:53:03.759952 kubelet[2290]: I0527 02:53:03.759916 2290 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 02:53:04.139567 systemd[1]: Reload requested from client PID 2564 ('systemctl') (unit session-7.scope)... May 27 02:53:04.139584 systemd[1]: Reloading... May 27 02:53:04.203348 zram_generator::config[2607]: No configuration found. May 27 02:53:04.272591 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 02:53:04.369880 systemd[1]: Reloading finished in 230 ms. 
May 27 02:53:04.396443 kubelet[2290]: I0527 02:53:04.396347 2290 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 02:53:04.397594 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:53:04.412495 systemd[1]: kubelet.service: Deactivated successfully. May 27 02:53:04.414381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:53:04.414459 systemd[1]: kubelet.service: Consumed 1.357s CPU time, 131.7M memory peak. May 27 02:53:04.416145 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:53:04.545441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:53:04.555670 (kubelet)[2649]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 02:53:04.591296 kubelet[2649]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 02:53:04.591296 kubelet[2649]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 02:53:04.591296 kubelet[2649]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 27 02:53:04.591645 kubelet[2649]: I0527 02:53:04.591369 2649 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 02:53:04.599341 kubelet[2649]: I0527 02:53:04.597972 2649 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 27 02:53:04.599341 kubelet[2649]: I0527 02:53:04.597997 2649 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 02:53:04.599341 kubelet[2649]: I0527 02:53:04.598217 2649 server.go:954] "Client rotation is on, will bootstrap in background" May 27 02:53:04.599635 kubelet[2649]: I0527 02:53:04.599616 2649 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 27 02:53:04.601976 kubelet[2649]: I0527 02:53:04.601945 2649 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 02:53:04.605916 kubelet[2649]: I0527 02:53:04.605901 2649 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 02:53:04.608441 kubelet[2649]: I0527 02:53:04.608417 2649 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 27 02:53:04.608615 kubelet[2649]: I0527 02:53:04.608592 2649 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 02:53:04.608777 kubelet[2649]: I0527 02:53:04.608615 2649 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 02:53:04.608850 kubelet[2649]: I0527 02:53:04.608787 2649 topology_manager.go:138] "Creating topology manager with none policy" 
May 27 02:53:04.608850 kubelet[2649]: I0527 02:53:04.608797 2649 container_manager_linux.go:304] "Creating device plugin manager" May 27 02:53:04.608850 kubelet[2649]: I0527 02:53:04.608841 2649 state_mem.go:36] "Initialized new in-memory state store" May 27 02:53:04.608966 kubelet[2649]: I0527 02:53:04.608957 2649 kubelet.go:446] "Attempting to sync node with API server" May 27 02:53:04.608994 kubelet[2649]: I0527 02:53:04.608969 2649 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 02:53:04.608994 kubelet[2649]: I0527 02:53:04.608989 2649 kubelet.go:352] "Adding apiserver pod source" May 27 02:53:04.609045 kubelet[2649]: I0527 02:53:04.609001 2649 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 02:53:04.609636 kubelet[2649]: I0527 02:53:04.609496 2649 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 02:53:04.610641 kubelet[2649]: I0527 02:53:04.610596 2649 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 02:53:04.611051 kubelet[2649]: I0527 02:53:04.611025 2649 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 02:53:04.611051 kubelet[2649]: I0527 02:53:04.611055 2649 server.go:1287] "Started kubelet" May 27 02:53:04.613197 kubelet[2649]: I0527 02:53:04.613162 2649 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 02:53:04.614207 kubelet[2649]: I0527 02:53:04.614169 2649 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 27 02:53:04.614527 kubelet[2649]: I0527 02:53:04.614359 2649 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 02:53:04.614681 kubelet[2649]: I0527 02:53:04.614605 2649 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 02:53:04.615039 kubelet[2649]: I0527 02:53:04.615007 2649 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 02:53:04.616086 kubelet[2649]: I0527 02:53:04.616059 2649 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 02:53:04.616172 kubelet[2649]: E0527 02:53:04.616151 2649 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 02:53:04.616559 kubelet[2649]: I0527 02:53:04.616264 2649 factory.go:221] Registration of the systemd container factory successfully May 27 02:53:04.616675 kubelet[2649]: I0527 02:53:04.616654 2649 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 02:53:04.616774 kubelet[2649]: I0527 02:53:04.616759 2649 reconciler.go:26] "Reconciler: start to sync state" May 27 02:53:04.616905 kubelet[2649]: I0527 02:53:04.616875 2649 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 02:53:04.620784 kubelet[2649]: I0527 02:53:04.620760 2649 factory.go:221] Registration of the containerd container factory successfully May 27 02:53:04.629843 kubelet[2649]: I0527 02:53:04.625675 2649 server.go:479] "Adding debug handlers to kubelet server" May 27 02:53:04.630208 kubelet[2649]: I0527 02:53:04.630134 2649 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 02:53:04.633068 kubelet[2649]: I0527 02:53:04.632994 2649 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 27 02:53:04.633590 kubelet[2649]: I0527 02:53:04.633573 2649 status_manager.go:227] "Starting to sync pod status with apiserver" May 27 02:53:04.633681 kubelet[2649]: I0527 02:53:04.633666 2649 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 27 02:53:04.633739 kubelet[2649]: I0527 02:53:04.633731 2649 kubelet.go:2382] "Starting kubelet main sync loop" May 27 02:53:04.633839 kubelet[2649]: E0527 02:53:04.633821 2649 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 02:53:04.666452 kubelet[2649]: I0527 02:53:04.666351 2649 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 02:53:04.666452 kubelet[2649]: I0527 02:53:04.666375 2649 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 02:53:04.666452 kubelet[2649]: I0527 02:53:04.666396 2649 state_mem.go:36] "Initialized new in-memory state store" May 27 02:53:04.666588 kubelet[2649]: I0527 02:53:04.666552 2649 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 02:53:04.666588 kubelet[2649]: I0527 02:53:04.666563 2649 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 02:53:04.666588 kubelet[2649]: I0527 02:53:04.666579 2649 policy_none.go:49] "None policy: Start" May 27 02:53:04.666588 kubelet[2649]: I0527 02:53:04.666587 2649 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 02:53:04.666669 kubelet[2649]: I0527 02:53:04.666597 2649 state_mem.go:35] "Initializing new in-memory state store" May 27 02:53:04.666851 kubelet[2649]: I0527 02:53:04.666706 2649 state_mem.go:75] "Updated machine memory state" May 27 02:53:04.670838 kubelet[2649]: I0527 02:53:04.670803 2649 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 02:53:04.670982 kubelet[2649]: I0527 02:53:04.670955 2649 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 02:53:04.671026 kubelet[2649]: I0527 02:53:04.670971 2649 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 02:53:04.671534 kubelet[2649]: I0527 02:53:04.671515 2649 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 02:53:04.672396 kubelet[2649]: E0527 02:53:04.672371 2649 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 02:53:04.736450 kubelet[2649]: I0527 02:53:04.736393 2649 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 02:53:04.736602 kubelet[2649]: I0527 02:53:04.736551 2649 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 02:53:04.736629 kubelet[2649]: I0527 02:53:04.736600 2649 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 02:53:04.744503 kubelet[2649]: E0527 02:53:04.744341 2649 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 02:53:04.776760 kubelet[2649]: I0527 02:53:04.776739 2649 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 02:53:04.783148 kubelet[2649]: I0527 02:53:04.783122 2649 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 27 02:53:04.783219 kubelet[2649]: I0527 02:53:04.783186 2649 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 02:53:04.818553 kubelet[2649]: I0527 02:53:04.818519 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/447e79232307504a6964f3be51e3d64d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"447e79232307504a6964f3be51e3d64d\") " pod="kube-system/kube-scheduler-localhost" May 27 02:53:04.818553 kubelet[2649]: I0527 02:53:04.818553 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/36426eb75cee82d07816096475a8c81f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"36426eb75cee82d07816096475a8c81f\") " pod="kube-system/kube-apiserver-localhost" May 27 02:53:04.818660 kubelet[2649]: I0527 02:53:04.818575 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/36426eb75cee82d07816096475a8c81f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"36426eb75cee82d07816096475a8c81f\") " pod="kube-system/kube-apiserver-localhost" May 27 02:53:04.818660 kubelet[2649]: I0527 02:53:04.818595 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:53:04.818660 kubelet[2649]: I0527 02:53:04.818612 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:53:04.818660 kubelet[2649]: I0527 02:53:04.818628 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:53:04.818660 kubelet[2649]: I0527 02:53:04.818643 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:53:04.818769 kubelet[2649]: I0527 02:53:04.818657 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/36426eb75cee82d07816096475a8c81f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"36426eb75cee82d07816096475a8c81f\") " pod="kube-system/kube-apiserver-localhost" May 27 02:53:04.818769 kubelet[2649]: I0527 02:53:04.818670 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7c751acbcd1525da2f1a64e395f86bdd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"7c751acbcd1525da2f1a64e395f86bdd\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:53:05.609924 kubelet[2649]: I0527 02:53:05.609760 2649 apiserver.go:52] "Watching apiserver" May 27 02:53:05.617208 kubelet[2649]: I0527 02:53:05.617166 2649 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 02:53:05.654089 kubelet[2649]: I0527 02:53:05.654054 2649 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 02:53:05.656176 kubelet[2649]: I0527 02:53:05.654584 2649 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 02:53:05.683304 kubelet[2649]: I0527 02:53:05.682925 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6828895990000001 podStartE2EDuration="1.682889599s" podCreationTimestamp="2025-05-27 02:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:53:05.682676034 +0000 UTC m=+1.123820321" watchObservedRunningTime="2025-05-27 02:53:05.682889599 +0000 UTC m=+1.124033766" May 27 02:53:05.683304 kubelet[2649]: E0527 02:53:05.683109 2649 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 02:53:05.683706 kubelet[2649]: E0527 02:53:05.683688 2649 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 02:53:05.708864 kubelet[2649]: I0527 02:53:05.708800 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7087837179999998 podStartE2EDuration="1.708783718s" podCreationTimestamp="2025-05-27 02:53:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:53:05.694805049 +0000 UTC m=+1.135949296" watchObservedRunningTime="2025-05-27 02:53:05.708783718 +0000 UTC m=+1.149927925" May 27 02:53:05.717355 kubelet[2649]: I0527 02:53:05.717185 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.717170896 podStartE2EDuration="2.717170896s" podCreationTimestamp="2025-05-27 02:53:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:53:05.709047806 +0000 UTC m=+1.150192013" watchObservedRunningTime="2025-05-27 02:53:05.717170896 +0000 UTC m=+1.158315103" May 27 02:53:10.312948 kubelet[2649]: I0527 02:53:10.312913 2649 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 02:53:10.313283 containerd[1526]: 
time="2025-05-27T02:53:10.313200067Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 02:53:10.313518 kubelet[2649]: I0527 02:53:10.313394 2649 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 02:53:11.215607 systemd[1]: Created slice kubepods-besteffort-pod11cffce7_0fe2_4705_b77b_8398ed84fe83.slice - libcontainer container kubepods-besteffort-pod11cffce7_0fe2_4705_b77b_8398ed84fe83.slice. May 27 02:53:11.262396 kubelet[2649]: I0527 02:53:11.262344 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11cffce7-0fe2-4705-b77b-8398ed84fe83-lib-modules\") pod \"kube-proxy-j2ttj\" (UID: \"11cffce7-0fe2-4705-b77b-8398ed84fe83\") " pod="kube-system/kube-proxy-j2ttj" May 27 02:53:11.262396 kubelet[2649]: I0527 02:53:11.262387 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q69j\" (UniqueName: \"kubernetes.io/projected/11cffce7-0fe2-4705-b77b-8398ed84fe83-kube-api-access-8q69j\") pod \"kube-proxy-j2ttj\" (UID: \"11cffce7-0fe2-4705-b77b-8398ed84fe83\") " pod="kube-system/kube-proxy-j2ttj" May 27 02:53:11.262396 kubelet[2649]: I0527 02:53:11.262409 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/11cffce7-0fe2-4705-b77b-8398ed84fe83-kube-proxy\") pod \"kube-proxy-j2ttj\" (UID: \"11cffce7-0fe2-4705-b77b-8398ed84fe83\") " pod="kube-system/kube-proxy-j2ttj" May 27 02:53:11.262606 kubelet[2649]: I0527 02:53:11.262434 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11cffce7-0fe2-4705-b77b-8398ed84fe83-xtables-lock\") pod \"kube-proxy-j2ttj\" (UID: \"11cffce7-0fe2-4705-b77b-8398ed84fe83\") " 
pod="kube-system/kube-proxy-j2ttj" May 27 02:53:11.489894 systemd[1]: Created slice kubepods-besteffort-podb4a374da_2216_480f_ad28_bdffa5a4dad4.slice - libcontainer container kubepods-besteffort-podb4a374da_2216_480f_ad28_bdffa5a4dad4.slice. May 27 02:53:11.537571 containerd[1526]: time="2025-05-27T02:53:11.537526365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j2ttj,Uid:11cffce7-0fe2-4705-b77b-8398ed84fe83,Namespace:kube-system,Attempt:0,}" May 27 02:53:11.554357 containerd[1526]: time="2025-05-27T02:53:11.554269795Z" level=info msg="connecting to shim b1479a88c057c5ea3b25e68bb70a5f70ef1e212cb245ae30e2f050e49f652861" address="unix:///run/containerd/s/9f92aade1fdede75a33afd9f0278d1c949df574433f335759f9b3fece96073c3" namespace=k8s.io protocol=ttrpc version=3 May 27 02:53:11.564922 kubelet[2649]: I0527 02:53:11.564030 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b4a374da-2216-480f-ad28-bdffa5a4dad4-var-lib-calico\") pod \"tigera-operator-844669ff44-g4464\" (UID: \"b4a374da-2216-480f-ad28-bdffa5a4dad4\") " pod="tigera-operator/tigera-operator-844669ff44-g4464" May 27 02:53:11.565488 kubelet[2649]: I0527 02:53:11.565386 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6njt4\" (UniqueName: \"kubernetes.io/projected/b4a374da-2216-480f-ad28-bdffa5a4dad4-kube-api-access-6njt4\") pod \"tigera-operator-844669ff44-g4464\" (UID: \"b4a374da-2216-480f-ad28-bdffa5a4dad4\") " pod="tigera-operator/tigera-operator-844669ff44-g4464" May 27 02:53:11.580478 systemd[1]: Started cri-containerd-b1479a88c057c5ea3b25e68bb70a5f70ef1e212cb245ae30e2f050e49f652861.scope - libcontainer container b1479a88c057c5ea3b25e68bb70a5f70ef1e212cb245ae30e2f050e49f652861. 
May 27 02:53:11.603596 containerd[1526]: time="2025-05-27T02:53:11.603547080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j2ttj,Uid:11cffce7-0fe2-4705-b77b-8398ed84fe83,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1479a88c057c5ea3b25e68bb70a5f70ef1e212cb245ae30e2f050e49f652861\"" May 27 02:53:11.608146 containerd[1526]: time="2025-05-27T02:53:11.608101477Z" level=info msg="CreateContainer within sandbox \"b1479a88c057c5ea3b25e68bb70a5f70ef1e212cb245ae30e2f050e49f652861\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 02:53:11.622627 containerd[1526]: time="2025-05-27T02:53:11.622389735Z" level=info msg="Container f229083b7b7624483d4f5b0bbed7c4af6978f20edcbd0a69705bc8dbd2f22664: CDI devices from CRI Config.CDIDevices: []" May 27 02:53:11.631043 containerd[1526]: time="2025-05-27T02:53:11.630998340Z" level=info msg="CreateContainer within sandbox \"b1479a88c057c5ea3b25e68bb70a5f70ef1e212cb245ae30e2f050e49f652861\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f229083b7b7624483d4f5b0bbed7c4af6978f20edcbd0a69705bc8dbd2f22664\"" May 27 02:53:11.631571 containerd[1526]: time="2025-05-27T02:53:11.631542983Z" level=info msg="StartContainer for \"f229083b7b7624483d4f5b0bbed7c4af6978f20edcbd0a69705bc8dbd2f22664\"" May 27 02:53:11.633187 containerd[1526]: time="2025-05-27T02:53:11.633152062Z" level=info msg="connecting to shim f229083b7b7624483d4f5b0bbed7c4af6978f20edcbd0a69705bc8dbd2f22664" address="unix:///run/containerd/s/9f92aade1fdede75a33afd9f0278d1c949df574433f335759f9b3fece96073c3" protocol=ttrpc version=3 May 27 02:53:11.653480 systemd[1]: Started cri-containerd-f229083b7b7624483d4f5b0bbed7c4af6978f20edcbd0a69705bc8dbd2f22664.scope - libcontainer container f229083b7b7624483d4f5b0bbed7c4af6978f20edcbd0a69705bc8dbd2f22664. 
May 27 02:53:11.708769 containerd[1526]: time="2025-05-27T02:53:11.708722342Z" level=info msg="StartContainer for \"f229083b7b7624483d4f5b0bbed7c4af6978f20edcbd0a69705bc8dbd2f22664\" returns successfully" May 27 02:53:11.793402 containerd[1526]: time="2025-05-27T02:53:11.793147141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-g4464,Uid:b4a374da-2216-480f-ad28-bdffa5a4dad4,Namespace:tigera-operator,Attempt:0,}" May 27 02:53:11.814919 containerd[1526]: time="2025-05-27T02:53:11.814800634Z" level=info msg="connecting to shim 4ee65e0c43295ceee4783e2049662b28f117517a521e5e4c55cb055e5e564aae" address="unix:///run/containerd/s/4205c44873ed38b00cd7ea1287610e784557374a9e0d0027ba9f0a61292fc434" namespace=k8s.io protocol=ttrpc version=3 May 27 02:53:11.846642 systemd[1]: Started cri-containerd-4ee65e0c43295ceee4783e2049662b28f117517a521e5e4c55cb055e5e564aae.scope - libcontainer container 4ee65e0c43295ceee4783e2049662b28f117517a521e5e4c55cb055e5e564aae. May 27 02:53:11.893265 containerd[1526]: time="2025-05-27T02:53:11.893195595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-844669ff44-g4464,Uid:b4a374da-2216-480f-ad28-bdffa5a4dad4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4ee65e0c43295ceee4783e2049662b28f117517a521e5e4c55cb055e5e564aae\"" May 27 02:53:11.896441 containerd[1526]: time="2025-05-27T02:53:11.896399830Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\"" May 27 02:53:13.026063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3522254831.mount: Deactivated successfully. 
May 27 02:53:14.149845 containerd[1526]: time="2025-05-27T02:53:14.149788485Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:14.150644 containerd[1526]: time="2025-05-27T02:53:14.150476339Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.0: active requests=0, bytes read=22143480" May 27 02:53:14.151484 containerd[1526]: time="2025-05-27T02:53:14.151451146Z" level=info msg="ImageCreate event name:\"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:14.153327 containerd[1526]: time="2025-05-27T02:53:14.153286651Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:14.154085 containerd[1526]: time="2025-05-27T02:53:14.154052685Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.0\" with image id \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\", repo tag \"quay.io/tigera/operator:v1.38.0\", repo digest \"quay.io/tigera/operator@sha256:e0a34b265aebce1a2db906d8dad99190706e8bf3910cae626b9c2eb6bbb21775\", size \"22139475\" in 2.257615964s" May 27 02:53:14.154085 containerd[1526]: time="2025-05-27T02:53:14.154084253Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.0\" returns image reference \"sha256:171854d50ba608218142ad5d32c7dd12ce55d536f02872e56e7c04c1f0a96a6b\"" May 27 02:53:14.158502 containerd[1526]: time="2025-05-27T02:53:14.158475365Z" level=info msg="CreateContainer within sandbox \"4ee65e0c43295ceee4783e2049662b28f117517a521e5e4c55cb055e5e564aae\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" May 27 02:53:14.165030 containerd[1526]: time="2025-05-27T02:53:14.164450998Z" level=info msg="Container 
3495713915cbb0bba227aae2b22808a7c2599bd6cda1edf7a488f6936fb64182: CDI devices from CRI Config.CDIDevices: []" May 27 02:53:14.172648 containerd[1526]: time="2025-05-27T02:53:14.172557211Z" level=info msg="CreateContainer within sandbox \"4ee65e0c43295ceee4783e2049662b28f117517a521e5e4c55cb055e5e564aae\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3495713915cbb0bba227aae2b22808a7c2599bd6cda1edf7a488f6936fb64182\"" May 27 02:53:14.174490 containerd[1526]: time="2025-05-27T02:53:14.174372751Z" level=info msg="StartContainer for \"3495713915cbb0bba227aae2b22808a7c2599bd6cda1edf7a488f6936fb64182\"" May 27 02:53:14.175369 containerd[1526]: time="2025-05-27T02:53:14.175340116Z" level=info msg="connecting to shim 3495713915cbb0bba227aae2b22808a7c2599bd6cda1edf7a488f6936fb64182" address="unix:///run/containerd/s/4205c44873ed38b00cd7ea1287610e784557374a9e0d0027ba9f0a61292fc434" protocol=ttrpc version=3 May 27 02:53:14.208494 systemd[1]: Started cri-containerd-3495713915cbb0bba227aae2b22808a7c2599bd6cda1edf7a488f6936fb64182.scope - libcontainer container 3495713915cbb0bba227aae2b22808a7c2599bd6cda1edf7a488f6936fb64182. 
May 27 02:53:14.248407 containerd[1526]: time="2025-05-27T02:53:14.248362211Z" level=info msg="StartContainer for \"3495713915cbb0bba227aae2b22808a7c2599bd6cda1edf7a488f6936fb64182\" returns successfully" May 27 02:53:14.684672 kubelet[2649]: I0527 02:53:14.684574 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j2ttj" podStartSLOduration=3.68455557 podStartE2EDuration="3.68455557s" podCreationTimestamp="2025-05-27 02:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:53:12.679770023 +0000 UTC m=+8.120914190" watchObservedRunningTime="2025-05-27 02:53:14.68455557 +0000 UTC m=+10.125699777" May 27 02:53:16.831797 kubelet[2649]: I0527 02:53:16.831652 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-844669ff44-g4464" podStartSLOduration=3.57002908 podStartE2EDuration="5.831634675s" podCreationTimestamp="2025-05-27 02:53:11 +0000 UTC" firstStartedPulling="2025-05-27 02:53:11.89505487 +0000 UTC m=+7.336199077" lastFinishedPulling="2025-05-27 02:53:14.156660465 +0000 UTC m=+9.597804672" observedRunningTime="2025-05-27 02:53:14.685038492 +0000 UTC m=+10.126182699" watchObservedRunningTime="2025-05-27 02:53:16.831634675 +0000 UTC m=+12.272778922" May 27 02:53:18.497271 update_engine[1514]: I20250527 02:53:18.497036 1514 update_attempter.cc:509] Updating boot flags... May 27 02:53:19.458770 sudo[1728]: pam_unix(sudo:session): session closed for user root May 27 02:53:19.470176 sshd[1727]: Connection closed by 10.0.0.1 port 36588 May 27 02:53:19.470596 sshd-session[1725]: pam_unix(sshd:session): session closed for user core May 27 02:53:19.477060 systemd[1]: sshd@6-10.0.0.73:22-10.0.0.1:36588.service: Deactivated successfully. May 27 02:53:19.480512 systemd[1]: session-7.scope: Deactivated successfully. 
May 27 02:53:19.480710 systemd[1]: session-7.scope: Consumed 6.830s CPU time, 222.2M memory peak. May 27 02:53:19.481675 systemd-logind[1506]: Session 7 logged out. Waiting for processes to exit. May 27 02:53:19.483027 systemd-logind[1506]: Removed session 7. May 27 02:53:23.012262 systemd[1]: Created slice kubepods-besteffort-podea21bc2b_111e_4368_be41_c15da846271b.slice - libcontainer container kubepods-besteffort-podea21bc2b_111e_4368_be41_c15da846271b.slice. May 27 02:53:23.046334 kubelet[2649]: I0527 02:53:23.046279 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea21bc2b-111e-4368-be41-c15da846271b-tigera-ca-bundle\") pod \"calico-typha-667887fcf4-pcw9z\" (UID: \"ea21bc2b-111e-4368-be41-c15da846271b\") " pod="calico-system/calico-typha-667887fcf4-pcw9z" May 27 02:53:23.046649 kubelet[2649]: I0527 02:53:23.046360 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gkl9\" (UniqueName: \"kubernetes.io/projected/ea21bc2b-111e-4368-be41-c15da846271b-kube-api-access-2gkl9\") pod \"calico-typha-667887fcf4-pcw9z\" (UID: \"ea21bc2b-111e-4368-be41-c15da846271b\") " pod="calico-system/calico-typha-667887fcf4-pcw9z" May 27 02:53:23.049347 kubelet[2649]: I0527 02:53:23.049318 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ea21bc2b-111e-4368-be41-c15da846271b-typha-certs\") pod \"calico-typha-667887fcf4-pcw9z\" (UID: \"ea21bc2b-111e-4368-be41-c15da846271b\") " pod="calico-system/calico-typha-667887fcf4-pcw9z" May 27 02:53:23.331624 containerd[1526]: time="2025-05-27T02:53:23.331457305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-667887fcf4-pcw9z,Uid:ea21bc2b-111e-4368-be41-c15da846271b,Namespace:calico-system,Attempt:0,}" May 27 02:53:23.372121 containerd[1526]: 
time="2025-05-27T02:53:23.371812283Z" level=info msg="connecting to shim a3efd10c804f6a12e68595c87fc3306351e6e335b138fffb8c93c750ba5dee2a" address="unix:///run/containerd/s/7cdd1e80096b19acde880e5e839b6d5e26d38eff009bdeaf727cb609e02af5f2" namespace=k8s.io protocol=ttrpc version=3 May 27 02:53:23.395272 systemd[1]: Created slice kubepods-besteffort-pod76a081e7_fc59_44c2_9fa0_6a0990ea46ec.slice - libcontainer container kubepods-besteffort-pod76a081e7_fc59_44c2_9fa0_6a0990ea46ec.slice. May 27 02:53:23.408506 systemd[1]: Started cri-containerd-a3efd10c804f6a12e68595c87fc3306351e6e335b138fffb8c93c750ba5dee2a.scope - libcontainer container a3efd10c804f6a12e68595c87fc3306351e6e335b138fffb8c93c750ba5dee2a. May 27 02:53:23.451580 kubelet[2649]: I0527 02:53:23.451504 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shwnk\" (UniqueName: \"kubernetes.io/projected/76a081e7-fc59-44c2-9fa0-6a0990ea46ec-kube-api-access-shwnk\") pod \"calico-node-zm44g\" (UID: \"76a081e7-fc59-44c2-9fa0-6a0990ea46ec\") " pod="calico-system/calico-node-zm44g" May 27 02:53:23.451580 kubelet[2649]: I0527 02:53:23.451571 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/76a081e7-fc59-44c2-9fa0-6a0990ea46ec-var-lib-calico\") pod \"calico-node-zm44g\" (UID: \"76a081e7-fc59-44c2-9fa0-6a0990ea46ec\") " pod="calico-system/calico-node-zm44g" May 27 02:53:23.465831 kubelet[2649]: I0527 02:53:23.451604 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76a081e7-fc59-44c2-9fa0-6a0990ea46ec-xtables-lock\") pod \"calico-node-zm44g\" (UID: \"76a081e7-fc59-44c2-9fa0-6a0990ea46ec\") " pod="calico-system/calico-node-zm44g" May 27 02:53:23.465831 kubelet[2649]: I0527 02:53:23.451633 2649 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/76a081e7-fc59-44c2-9fa0-6a0990ea46ec-flexvol-driver-host\") pod \"calico-node-zm44g\" (UID: \"76a081e7-fc59-44c2-9fa0-6a0990ea46ec\") " pod="calico-system/calico-node-zm44g" May 27 02:53:23.465831 kubelet[2649]: I0527 02:53:23.451657 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76a081e7-fc59-44c2-9fa0-6a0990ea46ec-tigera-ca-bundle\") pod \"calico-node-zm44g\" (UID: \"76a081e7-fc59-44c2-9fa0-6a0990ea46ec\") " pod="calico-system/calico-node-zm44g" May 27 02:53:23.465831 kubelet[2649]: I0527 02:53:23.451714 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/76a081e7-fc59-44c2-9fa0-6a0990ea46ec-cni-net-dir\") pod \"calico-node-zm44g\" (UID: \"76a081e7-fc59-44c2-9fa0-6a0990ea46ec\") " pod="calico-system/calico-node-zm44g" May 27 02:53:23.465831 kubelet[2649]: I0527 02:53:23.451754 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76a081e7-fc59-44c2-9fa0-6a0990ea46ec-lib-modules\") pod \"calico-node-zm44g\" (UID: \"76a081e7-fc59-44c2-9fa0-6a0990ea46ec\") " pod="calico-system/calico-node-zm44g" May 27 02:53:23.466644 kubelet[2649]: I0527 02:53:23.451774 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/76a081e7-fc59-44c2-9fa0-6a0990ea46ec-cni-bin-dir\") pod \"calico-node-zm44g\" (UID: \"76a081e7-fc59-44c2-9fa0-6a0990ea46ec\") " pod="calico-system/calico-node-zm44g" May 27 02:53:23.466644 kubelet[2649]: I0527 02:53:23.451797 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/76a081e7-fc59-44c2-9fa0-6a0990ea46ec-var-run-calico\") pod \"calico-node-zm44g\" (UID: \"76a081e7-fc59-44c2-9fa0-6a0990ea46ec\") " pod="calico-system/calico-node-zm44g" May 27 02:53:23.466644 kubelet[2649]: I0527 02:53:23.451816 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/76a081e7-fc59-44c2-9fa0-6a0990ea46ec-policysync\") pod \"calico-node-zm44g\" (UID: \"76a081e7-fc59-44c2-9fa0-6a0990ea46ec\") " pod="calico-system/calico-node-zm44g" May 27 02:53:23.466644 kubelet[2649]: I0527 02:53:23.451833 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/76a081e7-fc59-44c2-9fa0-6a0990ea46ec-node-certs\") pod \"calico-node-zm44g\" (UID: \"76a081e7-fc59-44c2-9fa0-6a0990ea46ec\") " pod="calico-system/calico-node-zm44g" May 27 02:53:23.466644 kubelet[2649]: I0527 02:53:23.451878 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/76a081e7-fc59-44c2-9fa0-6a0990ea46ec-cni-log-dir\") pod \"calico-node-zm44g\" (UID: \"76a081e7-fc59-44c2-9fa0-6a0990ea46ec\") " pod="calico-system/calico-node-zm44g" May 27 02:53:23.472536 containerd[1526]: time="2025-05-27T02:53:23.472472093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-667887fcf4-pcw9z,Uid:ea21bc2b-111e-4368-be41-c15da846271b,Namespace:calico-system,Attempt:0,} returns sandbox id \"a3efd10c804f6a12e68595c87fc3306351e6e335b138fffb8c93c750ba5dee2a\"" May 27 02:53:23.481295 containerd[1526]: time="2025-05-27T02:53:23.481265349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\"" May 27 02:53:23.574119 kubelet[2649]: E0527 02:53:23.574080 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: 
unexpected end of JSON input May 27 02:53:23.574119 kubelet[2649]: W0527 02:53:23.574102 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.574119 kubelet[2649]: E0527 02:53:23.574121 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.671230 kubelet[2649]: E0527 02:53:23.671121 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jq8vl" podUID="3e08f79c-1eaa-47ef-a241-1357b91487af" May 27 02:53:23.699873 containerd[1526]: time="2025-05-27T02:53:23.699801420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zm44g,Uid:76a081e7-fc59-44c2-9fa0-6a0990ea46ec,Namespace:calico-system,Attempt:0,}" May 27 02:53:23.716367 containerd[1526]: time="2025-05-27T02:53:23.716324481Z" level=info msg="connecting to shim bf1873b8fa060c83ead89e3b458aac7cf9a807a0c2c75792f537ded4c805e4d1" address="unix:///run/containerd/s/86e6334c6101e5e546fb31c1babd88dd7c3936d0ee94e0ecfe059b6b4ec75ba1" namespace=k8s.io protocol=ttrpc version=3 May 27 02:53:23.739339 kubelet[2649]: E0527 02:53:23.739212 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.739517 kubelet[2649]: W0527 02:53:23.739233 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.739661 kubelet[2649]: E0527 02:53:23.739588 2649 plugins.go:695] 
"Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.739919 kubelet[2649]: E0527 02:53:23.739905 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.740039 kubelet[2649]: W0527 02:53:23.739989 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.740177 kubelet[2649]: E0527 02:53:23.740068 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.740379 kubelet[2649]: E0527 02:53:23.740363 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.740457 kubelet[2649]: W0527 02:53:23.740444 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.740528 kubelet[2649]: E0527 02:53:23.740518 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.740755 kubelet[2649]: E0527 02:53:23.740742 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.740826 kubelet[2649]: W0527 02:53:23.740814 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.740892 kubelet[2649]: E0527 02:53:23.740881 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.741093 kubelet[2649]: E0527 02:53:23.741080 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.741093 kubelet[2649]: W0527 02:53:23.741123 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.741093 kubelet[2649]: E0527 02:53:23.741138 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.741496 kubelet[2649]: E0527 02:53:23.741482 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.741527 systemd[1]: Started cri-containerd-bf1873b8fa060c83ead89e3b458aac7cf9a807a0c2c75792f537ded4c805e4d1.scope - libcontainer container bf1873b8fa060c83ead89e3b458aac7cf9a807a0c2c75792f537ded4c805e4d1. 
May 27 02:53:23.741823 kubelet[2649]: W0527 02:53:23.741650 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.741823 kubelet[2649]: E0527 02:53:23.741669 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.742135 kubelet[2649]: E0527 02:53:23.742115 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.742205 kubelet[2649]: W0527 02:53:23.742194 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.742262 kubelet[2649]: E0527 02:53:23.742251 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.743031 kubelet[2649]: E0527 02:53:23.742935 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.743031 kubelet[2649]: W0527 02:53:23.742947 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.743777 kubelet[2649]: E0527 02:53:23.743337 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.743914 kubelet[2649]: E0527 02:53:23.743901 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.743997 kubelet[2649]: W0527 02:53:23.743986 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.744073 kubelet[2649]: E0527 02:53:23.744062 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.744378 kubelet[2649]: E0527 02:53:23.744362 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.744534 kubelet[2649]: W0527 02:53:23.744409 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.744534 kubelet[2649]: E0527 02:53:23.744423 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.744749 kubelet[2649]: E0527 02:53:23.744736 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.744830 kubelet[2649]: W0527 02:53:23.744818 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.744905 kubelet[2649]: E0527 02:53:23.744891 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.745360 kubelet[2649]: E0527 02:53:23.745118 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.745452 kubelet[2649]: W0527 02:53:23.745435 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.745508 kubelet[2649]: E0527 02:53:23.745498 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.745782 kubelet[2649]: E0527 02:53:23.745768 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.745875 kubelet[2649]: W0527 02:53:23.745861 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.746113 kubelet[2649]: E0527 02:53:23.745932 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.746421 kubelet[2649]: E0527 02:53:23.746406 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.746785 kubelet[2649]: W0527 02:53:23.746508 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.746785 kubelet[2649]: E0527 02:53:23.746525 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.747054 kubelet[2649]: E0527 02:53:23.746991 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.747054 kubelet[2649]: W0527 02:53:23.747005 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.747054 kubelet[2649]: E0527 02:53:23.747017 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.747503 kubelet[2649]: E0527 02:53:23.747482 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.747663 kubelet[2649]: W0527 02:53:23.747568 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.747663 kubelet[2649]: E0527 02:53:23.747632 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.748080 kubelet[2649]: E0527 02:53:23.747976 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.748080 kubelet[2649]: W0527 02:53:23.747990 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.748080 kubelet[2649]: E0527 02:53:23.748000 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.748287 kubelet[2649]: E0527 02:53:23.748275 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.748394 kubelet[2649]: W0527 02:53:23.748381 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.748466 kubelet[2649]: E0527 02:53:23.748456 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.748753 kubelet[2649]: E0527 02:53:23.748673 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.748753 kubelet[2649]: W0527 02:53:23.748686 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.748753 kubelet[2649]: E0527 02:53:23.748697 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.749255 kubelet[2649]: E0527 02:53:23.749241 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.749362 kubelet[2649]: W0527 02:53:23.749349 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.749439 kubelet[2649]: E0527 02:53:23.749420 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.755500 kubelet[2649]: E0527 02:53:23.755480 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.755500 kubelet[2649]: W0527 02:53:23.755496 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.755690 kubelet[2649]: E0527 02:53:23.755510 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.755690 kubelet[2649]: I0527 02:53:23.755543 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3e08f79c-1eaa-47ef-a241-1357b91487af-registration-dir\") pod \"csi-node-driver-jq8vl\" (UID: \"3e08f79c-1eaa-47ef-a241-1357b91487af\") " pod="calico-system/csi-node-driver-jq8vl" May 27 02:53:23.756401 kubelet[2649]: E0527 02:53:23.756382 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.756401 kubelet[2649]: W0527 02:53:23.756400 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.756470 kubelet[2649]: E0527 02:53:23.756422 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.756470 kubelet[2649]: I0527 02:53:23.756443 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3e08f79c-1eaa-47ef-a241-1357b91487af-socket-dir\") pod \"csi-node-driver-jq8vl\" (UID: \"3e08f79c-1eaa-47ef-a241-1357b91487af\") " pod="calico-system/csi-node-driver-jq8vl" May 27 02:53:23.757109 kubelet[2649]: E0527 02:53:23.757084 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.757109 kubelet[2649]: W0527 02:53:23.757104 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.757406 kubelet[2649]: E0527 02:53:23.757123 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.757865 kubelet[2649]: E0527 02:53:23.757828 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.757865 kubelet[2649]: W0527 02:53:23.757854 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.758081 kubelet[2649]: E0527 02:53:23.757873 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.758397 kubelet[2649]: E0527 02:53:23.758381 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.758397 kubelet[2649]: W0527 02:53:23.758396 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.758447 kubelet[2649]: E0527 02:53:23.758413 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.758447 kubelet[2649]: I0527 02:53:23.758433 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e08f79c-1eaa-47ef-a241-1357b91487af-kubelet-dir\") pod \"csi-node-driver-jq8vl\" (UID: \"3e08f79c-1eaa-47ef-a241-1357b91487af\") " pod="calico-system/csi-node-driver-jq8vl" May 27 02:53:23.758901 kubelet[2649]: E0527 02:53:23.758882 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.758957 kubelet[2649]: W0527 02:53:23.758900 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.758957 kubelet[2649]: E0527 02:53:23.758919 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.759664 kubelet[2649]: E0527 02:53:23.759648 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.759664 kubelet[2649]: W0527 02:53:23.759663 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.759718 kubelet[2649]: E0527 02:53:23.759680 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.760785 kubelet[2649]: E0527 02:53:23.760769 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.760785 kubelet[2649]: W0527 02:53:23.760783 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.760961 kubelet[2649]: E0527 02:53:23.760798 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.760961 kubelet[2649]: I0527 02:53:23.760817 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gfph\" (UniqueName: \"kubernetes.io/projected/3e08f79c-1eaa-47ef-a241-1357b91487af-kube-api-access-9gfph\") pod \"csi-node-driver-jq8vl\" (UID: \"3e08f79c-1eaa-47ef-a241-1357b91487af\") " pod="calico-system/csi-node-driver-jq8vl" May 27 02:53:23.761272 kubelet[2649]: E0527 02:53:23.761249 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.761272 kubelet[2649]: W0527 02:53:23.761267 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.761511 kubelet[2649]: E0527 02:53:23.761285 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.763455 kubelet[2649]: E0527 02:53:23.763407 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.763455 kubelet[2649]: W0527 02:53:23.763448 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.763531 kubelet[2649]: E0527 02:53:23.763468 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.763897 kubelet[2649]: E0527 02:53:23.763641 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.763897 kubelet[2649]: W0527 02:53:23.763656 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.763897 kubelet[2649]: E0527 02:53:23.763669 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.763897 kubelet[2649]: I0527 02:53:23.763686 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3e08f79c-1eaa-47ef-a241-1357b91487af-varrun\") pod \"csi-node-driver-jq8vl\" (UID: \"3e08f79c-1eaa-47ef-a241-1357b91487af\") " pod="calico-system/csi-node-driver-jq8vl" May 27 02:53:23.764413 kubelet[2649]: E0527 02:53:23.764165 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.764413 kubelet[2649]: W0527 02:53:23.764203 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.764413 kubelet[2649]: E0527 02:53:23.764224 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.764986 kubelet[2649]: E0527 02:53:23.764956 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.764986 kubelet[2649]: W0527 02:53:23.764981 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.765081 kubelet[2649]: E0527 02:53:23.764999 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.765558 kubelet[2649]: E0527 02:53:23.765538 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.765558 kubelet[2649]: W0527 02:53:23.765553 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.765745 kubelet[2649]: E0527 02:53:23.765565 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.765874 kubelet[2649]: E0527 02:53:23.765843 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.765874 kubelet[2649]: W0527 02:53:23.765859 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.765936 kubelet[2649]: E0527 02:53:23.765885 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.802250 containerd[1526]: time="2025-05-27T02:53:23.802199710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zm44g,Uid:76a081e7-fc59-44c2-9fa0-6a0990ea46ec,Namespace:calico-system,Attempt:0,} returns sandbox id \"bf1873b8fa060c83ead89e3b458aac7cf9a807a0c2c75792f537ded4c805e4d1\"" May 27 02:53:23.864929 kubelet[2649]: E0527 02:53:23.864889 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.864929 kubelet[2649]: W0527 02:53:23.864915 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.864929 kubelet[2649]: E0527 02:53:23.864936 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.865184 kubelet[2649]: E0527 02:53:23.865158 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.865184 kubelet[2649]: W0527 02:53:23.865173 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.865184 kubelet[2649]: E0527 02:53:23.865191 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.865399 kubelet[2649]: E0527 02:53:23.865376 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.865399 kubelet[2649]: W0527 02:53:23.865388 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.865399 kubelet[2649]: E0527 02:53:23.865402 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.865589 kubelet[2649]: E0527 02:53:23.865576 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.865589 kubelet[2649]: W0527 02:53:23.865587 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.865645 kubelet[2649]: E0527 02:53:23.865605 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.865813 kubelet[2649]: E0527 02:53:23.865786 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.865813 kubelet[2649]: W0527 02:53:23.865800 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.865813 kubelet[2649]: E0527 02:53:23.865813 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.866039 kubelet[2649]: E0527 02:53:23.866021 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.866072 kubelet[2649]: W0527 02:53:23.866039 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.866072 kubelet[2649]: E0527 02:53:23.866060 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.866224 kubelet[2649]: E0527 02:53:23.866213 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.866224 kubelet[2649]: W0527 02:53:23.866223 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.866273 kubelet[2649]: E0527 02:53:23.866240 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.866388 kubelet[2649]: E0527 02:53:23.866377 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.866388 kubelet[2649]: W0527 02:53:23.866387 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.866430 kubelet[2649]: E0527 02:53:23.866409 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.866617 kubelet[2649]: E0527 02:53:23.866601 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.866644 kubelet[2649]: W0527 02:53:23.866617 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.866644 kubelet[2649]: E0527 02:53:23.866634 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.866846 kubelet[2649]: E0527 02:53:23.866821 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.866878 kubelet[2649]: W0527 02:53:23.866845 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.866878 kubelet[2649]: E0527 02:53:23.866871 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.867035 kubelet[2649]: E0527 02:53:23.867023 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.867056 kubelet[2649]: W0527 02:53:23.867034 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.867056 kubelet[2649]: E0527 02:53:23.867047 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.867191 kubelet[2649]: E0527 02:53:23.867181 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.867211 kubelet[2649]: W0527 02:53:23.867190 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.867230 kubelet[2649]: E0527 02:53:23.867207 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.867354 kubelet[2649]: E0527 02:53:23.867343 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.867354 kubelet[2649]: W0527 02:53:23.867353 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.867409 kubelet[2649]: E0527 02:53:23.867371 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.867490 kubelet[2649]: E0527 02:53:23.867479 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.867511 kubelet[2649]: W0527 02:53:23.867489 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.867534 kubelet[2649]: E0527 02:53:23.867508 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.867623 kubelet[2649]: E0527 02:53:23.867614 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.867643 kubelet[2649]: W0527 02:53:23.867623 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.867667 kubelet[2649]: E0527 02:53:23.867639 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.867762 kubelet[2649]: E0527 02:53:23.867752 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.867784 kubelet[2649]: W0527 02:53:23.867762 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.867784 kubelet[2649]: E0527 02:53:23.867774 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.867915 kubelet[2649]: E0527 02:53:23.867904 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.867942 kubelet[2649]: W0527 02:53:23.867914 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.867942 kubelet[2649]: E0527 02:53:23.867937 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.868063 kubelet[2649]: E0527 02:53:23.868054 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.868086 kubelet[2649]: W0527 02:53:23.868065 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.868086 kubelet[2649]: E0527 02:53:23.868077 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.868231 kubelet[2649]: E0527 02:53:23.868221 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.868253 kubelet[2649]: W0527 02:53:23.868231 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.868253 kubelet[2649]: E0527 02:53:23.868244 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.868393 kubelet[2649]: E0527 02:53:23.868383 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.868421 kubelet[2649]: W0527 02:53:23.868393 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.868421 kubelet[2649]: E0527 02:53:23.868405 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.868527 kubelet[2649]: E0527 02:53:23.868518 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.868549 kubelet[2649]: W0527 02:53:23.868526 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.868549 kubelet[2649]: E0527 02:53:23.868537 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.868707 kubelet[2649]: E0527 02:53:23.868697 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.868727 kubelet[2649]: W0527 02:53:23.868707 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.868727 kubelet[2649]: E0527 02:53:23.868719 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.868849 kubelet[2649]: E0527 02:53:23.868839 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.868874 kubelet[2649]: W0527 02:53:23.868849 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.868874 kubelet[2649]: E0527 02:53:23.868865 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.868990 kubelet[2649]: E0527 02:53:23.868978 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.868990 kubelet[2649]: W0527 02:53:23.868988 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.869030 kubelet[2649]: E0527 02:53:23.868995 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:23.869136 kubelet[2649]: E0527 02:53:23.869127 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.869160 kubelet[2649]: W0527 02:53:23.869136 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.869160 kubelet[2649]: E0527 02:53:23.869144 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:23.878682 kubelet[2649]: E0527 02:53:23.878624 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:23.878682 kubelet[2649]: W0527 02:53:23.878642 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:23.878682 kubelet[2649]: E0527 02:53:23.878656 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:24.546127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1961740570.mount: Deactivated successfully. May 27 02:53:25.316352 containerd[1526]: time="2025-05-27T02:53:25.316025571Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:25.316677 containerd[1526]: time="2025-05-27T02:53:25.316472076Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.0: active requests=0, bytes read=33020269" May 27 02:53:25.317245 containerd[1526]: time="2025-05-27T02:53:25.317205864Z" level=info msg="ImageCreate event name:\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:25.319429 containerd[1526]: time="2025-05-27T02:53:25.319325695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:25.320174 containerd[1526]: time="2025-05-27T02:53:25.320132494Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.0\" with image id 
\"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d282f6c773c4631b9dc8379eb093c54ca34c7728d55d6509cb45da5e1f5baf8f\", size \"33020123\" in 1.838800895s" May 27 02:53:25.320174 containerd[1526]: time="2025-05-27T02:53:25.320160378Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.0\" returns image reference \"sha256:05ca98cdd7b8267a0dc5550048c0a195c8d42f85d92f090a669493485d8a6beb\"" May 27 02:53:25.329898 containerd[1526]: time="2025-05-27T02:53:25.328549730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 27 02:53:25.348060 containerd[1526]: time="2025-05-27T02:53:25.348016669Z" level=info msg="CreateContainer within sandbox \"a3efd10c804f6a12e68595c87fc3306351e6e335b138fffb8c93c750ba5dee2a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" May 27 02:53:25.355355 containerd[1526]: time="2025-05-27T02:53:25.355165478Z" level=info msg="Container 3a6c5533f511546131ef9110398131e2e77506fbfe67386d7d786785a1366285: CDI devices from CRI Config.CDIDevices: []" May 27 02:53:25.361568 containerd[1526]: time="2025-05-27T02:53:25.361528293Z" level=info msg="CreateContainer within sandbox \"a3efd10c804f6a12e68595c87fc3306351e6e335b138fffb8c93c750ba5dee2a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3a6c5533f511546131ef9110398131e2e77506fbfe67386d7d786785a1366285\"" May 27 02:53:25.363008 containerd[1526]: time="2025-05-27T02:53:25.362978426Z" level=info msg="StartContainer for \"3a6c5533f511546131ef9110398131e2e77506fbfe67386d7d786785a1366285\"" May 27 02:53:25.363998 containerd[1526]: time="2025-05-27T02:53:25.363968931Z" level=info msg="connecting to shim 3a6c5533f511546131ef9110398131e2e77506fbfe67386d7d786785a1366285" address="unix:///run/containerd/s/7cdd1e80096b19acde880e5e839b6d5e26d38eff009bdeaf727cb609e02af5f2" protocol=ttrpc version=3 May 27 
02:53:25.385458 systemd[1]: Started cri-containerd-3a6c5533f511546131ef9110398131e2e77506fbfe67386d7d786785a1366285.scope - libcontainer container 3a6c5533f511546131ef9110398131e2e77506fbfe67386d7d786785a1366285. May 27 02:53:25.417607 containerd[1526]: time="2025-05-27T02:53:25.417532037Z" level=info msg="StartContainer for \"3a6c5533f511546131ef9110398131e2e77506fbfe67386d7d786785a1366285\" returns successfully" May 27 02:53:25.636948 kubelet[2649]: E0527 02:53:25.636372 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jq8vl" podUID="3e08f79c-1eaa-47ef-a241-1357b91487af" May 27 02:53:25.732177 kubelet[2649]: I0527 02:53:25.732119 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-667887fcf4-pcw9z" podStartSLOduration=1.884985789 podStartE2EDuration="3.73210203s" podCreationTimestamp="2025-05-27 02:53:22 +0000 UTC" firstStartedPulling="2025-05-27 02:53:23.480968061 +0000 UTC m=+18.922112268" lastFinishedPulling="2025-05-27 02:53:25.328084222 +0000 UTC m=+20.769228509" observedRunningTime="2025-05-27 02:53:25.731455815 +0000 UTC m=+21.172600022" watchObservedRunningTime="2025-05-27 02:53:25.73210203 +0000 UTC m=+21.173246237" May 27 02:53:25.762499 kubelet[2649]: E0527 02:53:25.762477 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.762711 kubelet[2649]: W0527 02:53:25.762596 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.762711 kubelet[2649]: E0527 02:53:25.762619 2649 plugins.go:695] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.762861 kubelet[2649]: E0527 02:53:25.762851 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.762939 kubelet[2649]: W0527 02:53:25.762898 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.762987 kubelet[2649]: E0527 02:53:25.762977 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.763250 kubelet[2649]: E0527 02:53:25.763189 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.763250 kubelet[2649]: W0527 02:53:25.763201 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.763250 kubelet[2649]: E0527 02:53:25.763211 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:25.763576 kubelet[2649]: E0527 02:53:25.763508 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.763576 kubelet[2649]: W0527 02:53:25.763521 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.763576 kubelet[2649]: E0527 02:53:25.763531 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.763790 kubelet[2649]: E0527 02:53:25.763780 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.763902 kubelet[2649]: W0527 02:53:25.763848 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.763902 kubelet[2649]: E0527 02:53:25.763865 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:25.764092 kubelet[2649]: E0527 02:53:25.764082 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.764200 kubelet[2649]: W0527 02:53:25.764148 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.764200 kubelet[2649]: E0527 02:53:25.764163 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.764412 kubelet[2649]: E0527 02:53:25.764401 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.764526 kubelet[2649]: W0527 02:53:25.764472 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.764526 kubelet[2649]: E0527 02:53:25.764489 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:25.764770 kubelet[2649]: E0527 02:53:25.764705 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.764770 kubelet[2649]: W0527 02:53:25.764717 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.764770 kubelet[2649]: E0527 02:53:25.764726 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.765020 kubelet[2649]: E0527 02:53:25.764968 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.765020 kubelet[2649]: W0527 02:53:25.764979 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.765020 kubelet[2649]: E0527 02:53:25.764989 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:25.765243 kubelet[2649]: E0527 02:53:25.765233 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.765366 kubelet[2649]: W0527 02:53:25.765299 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.765366 kubelet[2649]: E0527 02:53:25.765331 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.765578 kubelet[2649]: E0527 02:53:25.765567 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.765730 kubelet[2649]: W0527 02:53:25.765640 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.765730 kubelet[2649]: E0527 02:53:25.765655 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:25.765835 kubelet[2649]: E0527 02:53:25.765826 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.765880 kubelet[2649]: W0527 02:53:25.765871 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.765926 kubelet[2649]: E0527 02:53:25.765916 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.766201 kubelet[2649]: E0527 02:53:25.766130 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.766201 kubelet[2649]: W0527 02:53:25.766151 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.766201 kubelet[2649]: E0527 02:53:25.766162 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:25.766486 kubelet[2649]: E0527 02:53:25.766431 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.766486 kubelet[2649]: W0527 02:53:25.766442 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.766486 kubelet[2649]: E0527 02:53:25.766452 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.766816 kubelet[2649]: E0527 02:53:25.766747 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.766816 kubelet[2649]: W0527 02:53:25.766758 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.766816 kubelet[2649]: E0527 02:53:25.766768 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:25.778231 kubelet[2649]: E0527 02:53:25.778155 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.778231 kubelet[2649]: W0527 02:53:25.778169 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.778231 kubelet[2649]: E0527 02:53:25.778181 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.778418 kubelet[2649]: E0527 02:53:25.778387 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.778418 kubelet[2649]: W0527 02:53:25.778399 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.778418 kubelet[2649]: E0527 02:53:25.778411 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:25.778599 kubelet[2649]: E0527 02:53:25.778566 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.778599 kubelet[2649]: W0527 02:53:25.778578 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.778599 kubelet[2649]: E0527 02:53:25.778588 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.778839 kubelet[2649]: E0527 02:53:25.778824 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.778839 kubelet[2649]: W0527 02:53:25.778837 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.778915 kubelet[2649]: E0527 02:53:25.778852 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:25.779038 kubelet[2649]: E0527 02:53:25.779019 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.779073 kubelet[2649]: W0527 02:53:25.779039 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.779073 kubelet[2649]: E0527 02:53:25.779060 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.779203 kubelet[2649]: E0527 02:53:25.779191 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.779203 kubelet[2649]: W0527 02:53:25.779202 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.779275 kubelet[2649]: E0527 02:53:25.779215 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:25.779360 kubelet[2649]: E0527 02:53:25.779348 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.779360 kubelet[2649]: W0527 02:53:25.779359 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.779422 kubelet[2649]: E0527 02:53:25.779372 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.779523 kubelet[2649]: E0527 02:53:25.779512 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.779561 kubelet[2649]: W0527 02:53:25.779523 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.779561 kubelet[2649]: E0527 02:53:25.779536 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:25.779778 kubelet[2649]: E0527 02:53:25.779758 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.779899 kubelet[2649]: W0527 02:53:25.779826 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.779899 kubelet[2649]: E0527 02:53:25.779850 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.780184 kubelet[2649]: E0527 02:53:25.780091 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.780184 kubelet[2649]: W0527 02:53:25.780101 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.780184 kubelet[2649]: E0527 02:53:25.780125 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:25.780339 kubelet[2649]: E0527 02:53:25.780330 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.780463 kubelet[2649]: W0527 02:53:25.780379 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.780463 kubelet[2649]: E0527 02:53:25.780438 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.780624 kubelet[2649]: E0527 02:53:25.780580 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.780624 kubelet[2649]: W0527 02:53:25.780591 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.780624 kubelet[2649]: E0527 02:53:25.780605 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:25.780838 kubelet[2649]: E0527 02:53:25.780824 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.780883 kubelet[2649]: W0527 02:53:25.780869 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.780908 kubelet[2649]: E0527 02:53:25.780888 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.781072 kubelet[2649]: E0527 02:53:25.781063 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.781099 kubelet[2649]: W0527 02:53:25.781073 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.781099 kubelet[2649]: E0527 02:53:25.781088 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:25.781326 kubelet[2649]: E0527 02:53:25.781298 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.781358 kubelet[2649]: W0527 02:53:25.781326 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.781358 kubelet[2649]: E0527 02:53:25.781337 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.781496 kubelet[2649]: E0527 02:53:25.781485 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.781496 kubelet[2649]: W0527 02:53:25.781495 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.781542 kubelet[2649]: E0527 02:53:25.781509 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:25.781659 kubelet[2649]: E0527 02:53:25.781646 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.781688 kubelet[2649]: W0527 02:53:25.781659 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.781688 kubelet[2649]: E0527 02:53:25.781672 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 27 02:53:25.781822 kubelet[2649]: E0527 02:53:25.781814 2649 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 27 02:53:25.781847 kubelet[2649]: W0527 02:53:25.781822 2649 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 27 02:53:25.781847 kubelet[2649]: E0527 02:53:25.781831 2649 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 27 02:53:26.556249 containerd[1526]: time="2025-05-27T02:53:26.556200735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:26.557054 containerd[1526]: time="2025-05-27T02:53:26.556882031Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=4264304" May 27 02:53:26.557650 containerd[1526]: time="2025-05-27T02:53:26.557620894Z" level=info msg="ImageCreate event name:\"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:26.559726 containerd[1526]: time="2025-05-27T02:53:26.559691945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:26.560330 containerd[1526]: time="2025-05-27T02:53:26.560276707Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5633505\" in 1.231694092s" May 27 02:53:26.560403 containerd[1526]: time="2025-05-27T02:53:26.560306191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:080eaf4c238c85534b61055c31b109c96ce3d20075391e58988541a442c7c701\"" May 27 02:53:26.562372 containerd[1526]: time="2025-05-27T02:53:26.562339597Z" level=info msg="CreateContainer within sandbox \"bf1873b8fa060c83ead89e3b458aac7cf9a807a0c2c75792f537ded4c805e4d1\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 27 02:53:26.571440 containerd[1526]: time="2025-05-27T02:53:26.569446955Z" level=info msg="Container 34c133764175ff7c8a669b5144417c3cc9ce6cb51c433d932ad749207d5cf2d0: CDI devices from CRI Config.CDIDevices: []" May 27 02:53:26.574019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3136959272.mount: Deactivated successfully. May 27 02:53:26.585559 containerd[1526]: time="2025-05-27T02:53:26.585445161Z" level=info msg="CreateContainer within sandbox \"bf1873b8fa060c83ead89e3b458aac7cf9a807a0c2c75792f537ded4c805e4d1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"34c133764175ff7c8a669b5144417c3cc9ce6cb51c433d932ad749207d5cf2d0\"" May 27 02:53:26.586356 containerd[1526]: time="2025-05-27T02:53:26.586331046Z" level=info msg="StartContainer for \"34c133764175ff7c8a669b5144417c3cc9ce6cb51c433d932ad749207d5cf2d0\"" May 27 02:53:26.587670 containerd[1526]: time="2025-05-27T02:53:26.587641190Z" level=info msg="connecting to shim 34c133764175ff7c8a669b5144417c3cc9ce6cb51c433d932ad749207d5cf2d0" address="unix:///run/containerd/s/86e6334c6101e5e546fb31c1babd88dd7c3936d0ee94e0ecfe059b6b4ec75ba1" protocol=ttrpc version=3 May 27 02:53:26.614454 systemd[1]: Started cri-containerd-34c133764175ff7c8a669b5144417c3cc9ce6cb51c433d932ad749207d5cf2d0.scope - libcontainer container 34c133764175ff7c8a669b5144417c3cc9ce6cb51c433d932ad749207d5cf2d0. May 27 02:53:26.647041 containerd[1526]: time="2025-05-27T02:53:26.645246238Z" level=info msg="StartContainer for \"34c133764175ff7c8a669b5144417c3cc9ce6cb51c433d932ad749207d5cf2d0\" returns successfully" May 27 02:53:26.664945 systemd[1]: cri-containerd-34c133764175ff7c8a669b5144417c3cc9ce6cb51c433d932ad749207d5cf2d0.scope: Deactivated successfully. May 27 02:53:26.665471 systemd[1]: cri-containerd-34c133764175ff7c8a669b5144417c3cc9ce6cb51c433d932ad749207d5cf2d0.scope: Consumed 36ms CPU time, 6.2M memory peak, 4.5M written to disk. 
May 27 02:53:26.691771 containerd[1526]: time="2025-05-27T02:53:26.691722724Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34c133764175ff7c8a669b5144417c3cc9ce6cb51c433d932ad749207d5cf2d0\" id:\"34c133764175ff7c8a669b5144417c3cc9ce6cb51c433d932ad749207d5cf2d0\" pid:3346 exited_at:{seconds:1748314406 nanos:682274957}" May 27 02:53:26.693300 containerd[1526]: time="2025-05-27T02:53:26.693251298Z" level=info msg="received exit event container_id:\"34c133764175ff7c8a669b5144417c3cc9ce6cb51c433d932ad749207d5cf2d0\" id:\"34c133764175ff7c8a669b5144417c3cc9ce6cb51c433d932ad749207d5cf2d0\" pid:3346 exited_at:{seconds:1748314406 nanos:682274957}" May 27 02:53:26.720776 kubelet[2649]: I0527 02:53:26.720372 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 02:53:26.747958 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34c133764175ff7c8a669b5144417c3cc9ce6cb51c433d932ad749207d5cf2d0-rootfs.mount: Deactivated successfully. May 27 02:53:27.634603 kubelet[2649]: E0527 02:53:27.634551 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jq8vl" podUID="3e08f79c-1eaa-47ef-a241-1357b91487af" May 27 02:53:27.724452 containerd[1526]: time="2025-05-27T02:53:27.724417402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 27 02:53:29.378271 kubelet[2649]: I0527 02:53:29.378232 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 02:53:29.634955 kubelet[2649]: E0527 02:53:29.634703 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-jq8vl" podUID="3e08f79c-1eaa-47ef-a241-1357b91487af" May 27 02:53:30.629348 containerd[1526]: time="2025-05-27T02:53:30.629270371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:30.629799 containerd[1526]: time="2025-05-27T02:53:30.629746867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=65748976" May 27 02:53:30.630593 containerd[1526]: time="2025-05-27T02:53:30.630567805Z" level=info msg="ImageCreate event name:\"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:30.632800 containerd[1526]: time="2025-05-27T02:53:30.632760344Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:30.633895 containerd[1526]: time="2025-05-27T02:53:30.633864515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"67118217\" in 2.909408828s" May 27 02:53:30.633998 containerd[1526]: time="2025-05-27T02:53:30.633982689Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:0a1b3d5412de2974bc057a3463a132f935c307bc06d5b990ad54031e1f5a351d\"" May 27 02:53:30.636457 containerd[1526]: time="2025-05-27T02:53:30.636423338Z" level=info msg="CreateContainer within sandbox \"bf1873b8fa060c83ead89e3b458aac7cf9a807a0c2c75792f537ded4c805e4d1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 27 02:53:30.645747 
containerd[1526]: time="2025-05-27T02:53:30.645694436Z" level=info msg="Container db7d95c950c2481deb7e073339a2ef29c8ce8710169caa86520a18ca4e1fd32c: CDI devices from CRI Config.CDIDevices: []" May 27 02:53:30.654155 containerd[1526]: time="2025-05-27T02:53:30.654103552Z" level=info msg="CreateContainer within sandbox \"bf1873b8fa060c83ead89e3b458aac7cf9a807a0c2c75792f537ded4c805e4d1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"db7d95c950c2481deb7e073339a2ef29c8ce8710169caa86520a18ca4e1fd32c\"" May 27 02:53:30.654903 containerd[1526]: time="2025-05-27T02:53:30.654872924Z" level=info msg="StartContainer for \"db7d95c950c2481deb7e073339a2ef29c8ce8710169caa86520a18ca4e1fd32c\"" May 27 02:53:30.656457 containerd[1526]: time="2025-05-27T02:53:30.656413306Z" level=info msg="connecting to shim db7d95c950c2481deb7e073339a2ef29c8ce8710169caa86520a18ca4e1fd32c" address="unix:///run/containerd/s/86e6334c6101e5e546fb31c1babd88dd7c3936d0ee94e0ecfe059b6b4ec75ba1" protocol=ttrpc version=3 May 27 02:53:30.677512 systemd[1]: Started cri-containerd-db7d95c950c2481deb7e073339a2ef29c8ce8710169caa86520a18ca4e1fd32c.scope - libcontainer container db7d95c950c2481deb7e073339a2ef29c8ce8710169caa86520a18ca4e1fd32c. May 27 02:53:30.717847 containerd[1526]: time="2025-05-27T02:53:30.717510463Z" level=info msg="StartContainer for \"db7d95c950c2481deb7e073339a2ef29c8ce8710169caa86520a18ca4e1fd32c\" returns successfully" May 27 02:53:31.361714 systemd[1]: cri-containerd-db7d95c950c2481deb7e073339a2ef29c8ce8710169caa86520a18ca4e1fd32c.scope: Deactivated successfully. May 27 02:53:31.362029 systemd[1]: cri-containerd-db7d95c950c2481deb7e073339a2ef29c8ce8710169caa86520a18ca4e1fd32c.scope: Consumed 513ms CPU time, 171.2M memory peak, 1.6M read from disk, 165.5M written to disk. 
May 27 02:53:31.365811 containerd[1526]: time="2025-05-27T02:53:31.365776666Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db7d95c950c2481deb7e073339a2ef29c8ce8710169caa86520a18ca4e1fd32c\" id:\"db7d95c950c2481deb7e073339a2ef29c8ce8710169caa86520a18ca4e1fd32c\" pid:3409 exited_at:{seconds:1748314411 nanos:365083467}" May 27 02:53:31.367453 containerd[1526]: time="2025-05-27T02:53:31.367415053Z" level=info msg="received exit event container_id:\"db7d95c950c2481deb7e073339a2ef29c8ce8710169caa86520a18ca4e1fd32c\" id:\"db7d95c950c2481deb7e073339a2ef29c8ce8710169caa86520a18ca4e1fd32c\" pid:3409 exited_at:{seconds:1748314411 nanos:365083467}" May 27 02:53:31.387887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db7d95c950c2481deb7e073339a2ef29c8ce8710169caa86520a18ca4e1fd32c-rootfs.mount: Deactivated successfully. May 27 02:53:31.392729 kubelet[2649]: I0527 02:53:31.392695 2649 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 27 02:53:31.482460 systemd[1]: Created slice kubepods-burstable-pod1cf6eb9f_a9dc_4c12_98a3_c40432c7841b.slice - libcontainer container kubepods-burstable-pod1cf6eb9f_a9dc_4c12_98a3_c40432c7841b.slice. May 27 02:53:31.496621 systemd[1]: Created slice kubepods-besteffort-podfa3f2bac_b19c_48be_9c4a_939e92dbf744.slice - libcontainer container kubepods-besteffort-podfa3f2bac_b19c_48be_9c4a_939e92dbf744.slice. May 27 02:53:31.504013 systemd[1]: Created slice kubepods-besteffort-podf1e80ce0_9104_4f5e_8461_05e2f0fe3a75.slice - libcontainer container kubepods-besteffort-podf1e80ce0_9104_4f5e_8461_05e2f0fe3a75.slice. May 27 02:53:31.510969 systemd[1]: Created slice kubepods-besteffort-pod3a13e2c8_3ee1_4d3c_8d96_1e0ff5c9e215.slice - libcontainer container kubepods-besteffort-pod3a13e2c8_3ee1_4d3c_8d96_1e0ff5c9e215.slice. 
May 27 02:53:31.515131 kubelet[2649]: I0527 02:53:31.515091 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbmkl\" (UniqueName: \"kubernetes.io/projected/db1824d8-31c5-478d-b9fc-7ea21fbef306-kube-api-access-mbmkl\") pod \"coredns-668d6bf9bc-nwmms\" (UID: \"db1824d8-31c5-478d-b9fc-7ea21fbef306\") " pod="kube-system/coredns-668d6bf9bc-nwmms" May 27 02:53:31.515131 kubelet[2649]: I0527 02:53:31.515134 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rp5vh\" (UniqueName: \"kubernetes.io/projected/f1e80ce0-9104-4f5e-8461-05e2f0fe3a75-kube-api-access-rp5vh\") pod \"calico-apiserver-699dbd9b98-vpb9z\" (UID: \"f1e80ce0-9104-4f5e-8461-05e2f0fe3a75\") " pod="calico-apiserver/calico-apiserver-699dbd9b98-vpb9z" May 27 02:53:31.515497 kubelet[2649]: I0527 02:53:31.515154 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db1824d8-31c5-478d-b9fc-7ea21fbef306-config-volume\") pod \"coredns-668d6bf9bc-nwmms\" (UID: \"db1824d8-31c5-478d-b9fc-7ea21fbef306\") " pod="kube-system/coredns-668d6bf9bc-nwmms" May 27 02:53:31.515497 kubelet[2649]: I0527 02:53:31.515240 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1cf6eb9f-a9dc-4c12-98a3-c40432c7841b-config-volume\") pod \"coredns-668d6bf9bc-j75ql\" (UID: \"1cf6eb9f-a9dc-4c12-98a3-c40432c7841b\") " pod="kube-system/coredns-668d6bf9bc-j75ql" May 27 02:53:31.515497 kubelet[2649]: I0527 02:53:31.515274 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f1e80ce0-9104-4f5e-8461-05e2f0fe3a75-calico-apiserver-certs\") pod \"calico-apiserver-699dbd9b98-vpb9z\" (UID: 
\"f1e80ce0-9104-4f5e-8461-05e2f0fe3a75\") " pod="calico-apiserver/calico-apiserver-699dbd9b98-vpb9z" May 27 02:53:31.515497 kubelet[2649]: I0527 02:53:31.515337 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hz6cz\" (UniqueName: \"kubernetes.io/projected/fa3f2bac-b19c-48be-9c4a-939e92dbf744-kube-api-access-hz6cz\") pod \"calico-kube-controllers-74b44f47cd-9h674\" (UID: \"fa3f2bac-b19c-48be-9c4a-939e92dbf744\") " pod="calico-system/calico-kube-controllers-74b44f47cd-9h674" May 27 02:53:31.515497 kubelet[2649]: I0527 02:53:31.515363 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w2cs\" (UniqueName: \"kubernetes.io/projected/3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215-kube-api-access-9w2cs\") pod \"whisker-757df98665-s4dh7\" (UID: \"3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215\") " pod="calico-system/whisker-757df98665-s4dh7" May 27 02:53:31.515621 kubelet[2649]: I0527 02:53:31.515408 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215-whisker-backend-key-pair\") pod \"whisker-757df98665-s4dh7\" (UID: \"3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215\") " pod="calico-system/whisker-757df98665-s4dh7" May 27 02:53:31.515621 kubelet[2649]: I0527 02:53:31.515429 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv9q5\" (UniqueName: \"kubernetes.io/projected/1cf6eb9f-a9dc-4c12-98a3-c40432c7841b-kube-api-access-gv9q5\") pod \"coredns-668d6bf9bc-j75ql\" (UID: \"1cf6eb9f-a9dc-4c12-98a3-c40432c7841b\") " pod="kube-system/coredns-668d6bf9bc-j75ql" May 27 02:53:31.515621 kubelet[2649]: I0527 02:53:31.515487 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/fa3f2bac-b19c-48be-9c4a-939e92dbf744-tigera-ca-bundle\") pod \"calico-kube-controllers-74b44f47cd-9h674\" (UID: \"fa3f2bac-b19c-48be-9c4a-939e92dbf744\") " pod="calico-system/calico-kube-controllers-74b44f47cd-9h674" May 27 02:53:31.515621 kubelet[2649]: I0527 02:53:31.515516 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215-whisker-ca-bundle\") pod \"whisker-757df98665-s4dh7\" (UID: \"3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215\") " pod="calico-system/whisker-757df98665-s4dh7" May 27 02:53:31.521035 systemd[1]: Created slice kubepods-burstable-poddb1824d8_31c5_478d_b9fc_7ea21fbef306.slice - libcontainer container kubepods-burstable-poddb1824d8_31c5_478d_b9fc_7ea21fbef306.slice. May 27 02:53:31.525872 systemd[1]: Created slice kubepods-besteffort-pod2f5769c9_834e_4b2c_8b1f_7f5c674d09af.slice - libcontainer container kubepods-besteffort-pod2f5769c9_834e_4b2c_8b1f_7f5c674d09af.slice. May 27 02:53:31.533649 systemd[1]: Created slice kubepods-besteffort-pod88e5918f_832b_4950_9cef_4b797dda53c9.slice - libcontainer container kubepods-besteffort-pod88e5918f_832b_4950_9cef_4b797dda53c9.slice. 
May 27 02:53:31.619794 kubelet[2649]: I0527 02:53:31.619682 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/88e5918f-832b-4950-9cef-4b797dda53c9-goldmane-key-pair\") pod \"goldmane-78d55f7ddc-nqjnl\" (UID: \"88e5918f-832b-4950-9cef-4b797dda53c9\") " pod="calico-system/goldmane-78d55f7ddc-nqjnl" May 27 02:53:31.622722 kubelet[2649]: I0527 02:53:31.620371 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88e5918f-832b-4950-9cef-4b797dda53c9-goldmane-ca-bundle\") pod \"goldmane-78d55f7ddc-nqjnl\" (UID: \"88e5918f-832b-4950-9cef-4b797dda53c9\") " pod="calico-system/goldmane-78d55f7ddc-nqjnl" May 27 02:53:31.622722 kubelet[2649]: I0527 02:53:31.620430 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2f5769c9-834e-4b2c-8b1f-7f5c674d09af-calico-apiserver-certs\") pod \"calico-apiserver-699dbd9b98-5tz6h\" (UID: \"2f5769c9-834e-4b2c-8b1f-7f5c674d09af\") " pod="calico-apiserver/calico-apiserver-699dbd9b98-5tz6h" May 27 02:53:31.622722 kubelet[2649]: I0527 02:53:31.620448 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/88e5918f-832b-4950-9cef-4b797dda53c9-config\") pod \"goldmane-78d55f7ddc-nqjnl\" (UID: \"88e5918f-832b-4950-9cef-4b797dda53c9\") " pod="calico-system/goldmane-78d55f7ddc-nqjnl" May 27 02:53:31.626401 kubelet[2649]: I0527 02:53:31.626372 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6smg\" (UniqueName: \"kubernetes.io/projected/2f5769c9-834e-4b2c-8b1f-7f5c674d09af-kube-api-access-x6smg\") pod \"calico-apiserver-699dbd9b98-5tz6h\" (UID: 
\"2f5769c9-834e-4b2c-8b1f-7f5c674d09af\") " pod="calico-apiserver/calico-apiserver-699dbd9b98-5tz6h" May 27 02:53:31.626485 kubelet[2649]: I0527 02:53:31.626421 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xth2m\" (UniqueName: \"kubernetes.io/projected/88e5918f-832b-4950-9cef-4b797dda53c9-kube-api-access-xth2m\") pod \"goldmane-78d55f7ddc-nqjnl\" (UID: \"88e5918f-832b-4950-9cef-4b797dda53c9\") " pod="calico-system/goldmane-78d55f7ddc-nqjnl" May 27 02:53:31.642797 systemd[1]: Created slice kubepods-besteffort-pod3e08f79c_1eaa_47ef_a241_1357b91487af.slice - libcontainer container kubepods-besteffort-pod3e08f79c_1eaa_47ef_a241_1357b91487af.slice. May 27 02:53:31.656246 containerd[1526]: time="2025-05-27T02:53:31.656179871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jq8vl,Uid:3e08f79c-1eaa-47ef-a241-1357b91487af,Namespace:calico-system,Attempt:0,}" May 27 02:53:31.750064 containerd[1526]: time="2025-05-27T02:53:31.749931459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 27 02:53:31.789288 containerd[1526]: time="2025-05-27T02:53:31.789248773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j75ql,Uid:1cf6eb9f-a9dc-4c12-98a3-c40432c7841b,Namespace:kube-system,Attempt:0,}" May 27 02:53:31.800443 containerd[1526]: time="2025-05-27T02:53:31.800396841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b44f47cd-9h674,Uid:fa3f2bac-b19c-48be-9c4a-939e92dbf744,Namespace:calico-system,Attempt:0,}" May 27 02:53:31.820261 containerd[1526]: time="2025-05-27T02:53:31.812551264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699dbd9b98-vpb9z,Uid:f1e80ce0-9104-4f5e-8461-05e2f0fe3a75,Namespace:calico-apiserver,Attempt:0,}" May 27 02:53:31.820554 containerd[1526]: time="2025-05-27T02:53:31.820521251Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-757df98665-s4dh7,Uid:3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215,Namespace:calico-system,Attempt:0,}" May 27 02:53:31.828619 containerd[1526]: time="2025-05-27T02:53:31.827509886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nwmms,Uid:db1824d8-31c5-478d-b9fc-7ea21fbef306,Namespace:kube-system,Attempt:0,}" May 27 02:53:31.838262 containerd[1526]: time="2025-05-27T02:53:31.837564190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-nqjnl,Uid:88e5918f-832b-4950-9cef-4b797dda53c9,Namespace:calico-system,Attempt:0,}" May 27 02:53:31.848237 containerd[1526]: time="2025-05-27T02:53:31.842974766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699dbd9b98-5tz6h,Uid:2f5769c9-834e-4b2c-8b1f-7f5c674d09af,Namespace:calico-apiserver,Attempt:0,}" May 27 02:53:32.198545 containerd[1526]: time="2025-05-27T02:53:32.198491546Z" level=error msg="Failed to destroy network for sandbox \"765c0525b14bdca9d80dcf065a44d6cebc9936ed8c836e48db6ec8fcf0fbd8ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.201550 containerd[1526]: time="2025-05-27T02:53:32.201426907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b44f47cd-9h674,Uid:fa3f2bac-b19c-48be-9c4a-939e92dbf744,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"765c0525b14bdca9d80dcf065a44d6cebc9936ed8c836e48db6ec8fcf0fbd8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.204400 kubelet[2649]: E0527 02:53:32.204327 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"765c0525b14bdca9d80dcf065a44d6cebc9936ed8c836e48db6ec8fcf0fbd8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.207976 kubelet[2649]: E0527 02:53:32.207733 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"765c0525b14bdca9d80dcf065a44d6cebc9936ed8c836e48db6ec8fcf0fbd8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74b44f47cd-9h674" May 27 02:53:32.207976 kubelet[2649]: E0527 02:53:32.207783 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"765c0525b14bdca9d80dcf065a44d6cebc9936ed8c836e48db6ec8fcf0fbd8ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74b44f47cd-9h674" May 27 02:53:32.207976 kubelet[2649]: E0527 02:53:32.207839 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74b44f47cd-9h674_calico-system(fa3f2bac-b19c-48be-9c4a-939e92dbf744)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74b44f47cd-9h674_calico-system(fa3f2bac-b19c-48be-9c4a-939e92dbf744)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"765c0525b14bdca9d80dcf065a44d6cebc9936ed8c836e48db6ec8fcf0fbd8ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74b44f47cd-9h674" podUID="fa3f2bac-b19c-48be-9c4a-939e92dbf744" May 27 02:53:32.216320 containerd[1526]: time="2025-05-27T02:53:32.216266291Z" level=error msg="Failed to destroy network for sandbox \"b323b00247fa90cceea8ba82dfffadaed4fb0379c7f037a25af2e0eca3fbd9f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.219286 containerd[1526]: time="2025-05-27T02:53:32.219233896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699dbd9b98-5tz6h,Uid:2f5769c9-834e-4b2c-8b1f-7f5c674d09af,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b323b00247fa90cceea8ba82dfffadaed4fb0379c7f037a25af2e0eca3fbd9f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.219550 kubelet[2649]: E0527 02:53:32.219512 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b323b00247fa90cceea8ba82dfffadaed4fb0379c7f037a25af2e0eca3fbd9f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.219634 kubelet[2649]: E0527 02:53:32.219572 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b323b00247fa90cceea8ba82dfffadaed4fb0379c7f037a25af2e0eca3fbd9f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699dbd9b98-5tz6h" May 27 02:53:32.219634 kubelet[2649]: E0527 02:53:32.219591 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b323b00247fa90cceea8ba82dfffadaed4fb0379c7f037a25af2e0eca3fbd9f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699dbd9b98-5tz6h" May 27 02:53:32.219686 kubelet[2649]: E0527 02:53:32.219638 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-699dbd9b98-5tz6h_calico-apiserver(2f5769c9-834e-4b2c-8b1f-7f5c674d09af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-699dbd9b98-5tz6h_calico-apiserver(2f5769c9-834e-4b2c-8b1f-7f5c674d09af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b323b00247fa90cceea8ba82dfffadaed4fb0379c7f037a25af2e0eca3fbd9f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-699dbd9b98-5tz6h" podUID="2f5769c9-834e-4b2c-8b1f-7f5c674d09af" May 27 02:53:32.222849 containerd[1526]: time="2025-05-27T02:53:32.222753641Z" level=error msg="Failed to destroy network for sandbox \"a78b5927eb74c26f004ac1fdbc1ad0b0b17bd13867fa22059e3ea18ee78421dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.224186 containerd[1526]: time="2025-05-27T02:53:32.224119950Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-699dbd9b98-vpb9z,Uid:f1e80ce0-9104-4f5e-8461-05e2f0fe3a75,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a78b5927eb74c26f004ac1fdbc1ad0b0b17bd13867fa22059e3ea18ee78421dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.224388 kubelet[2649]: E0527 02:53:32.224343 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a78b5927eb74c26f004ac1fdbc1ad0b0b17bd13867fa22059e3ea18ee78421dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.224441 kubelet[2649]: E0527 02:53:32.224398 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a78b5927eb74c26f004ac1fdbc1ad0b0b17bd13867fa22059e3ea18ee78421dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699dbd9b98-vpb9z" May 27 02:53:32.224441 kubelet[2649]: E0527 02:53:32.224419 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a78b5927eb74c26f004ac1fdbc1ad0b0b17bd13867fa22059e3ea18ee78421dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-699dbd9b98-vpb9z" May 27 02:53:32.224488 kubelet[2649]: E0527 02:53:32.224450 2649 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-699dbd9b98-vpb9z_calico-apiserver(f1e80ce0-9104-4f5e-8461-05e2f0fe3a75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-699dbd9b98-vpb9z_calico-apiserver(f1e80ce0-9104-4f5e-8461-05e2f0fe3a75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a78b5927eb74c26f004ac1fdbc1ad0b0b17bd13867fa22059e3ea18ee78421dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-699dbd9b98-vpb9z" podUID="f1e80ce0-9104-4f5e-8461-05e2f0fe3a75" May 27 02:53:32.226374 containerd[1526]: time="2025-05-27T02:53:32.225923148Z" level=error msg="Failed to destroy network for sandbox \"d0c39ce7f9fc745edbdb866502b6676297c65e522b324b58b402a3d0f967f91b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.226996 containerd[1526]: time="2025-05-27T02:53:32.226967102Z" level=error msg="Failed to destroy network for sandbox \"78b8672129284a45d794c7a4da4b3f64a8aea0fc92575e4bdc78dcb436aaba66\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.227419 containerd[1526]: time="2025-05-27T02:53:32.227360425Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-757df98665-s4dh7,Uid:3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0c39ce7f9fc745edbdb866502b6676297c65e522b324b58b402a3d0f967f91b\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.227808 kubelet[2649]: E0527 02:53:32.227660 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0c39ce7f9fc745edbdb866502b6676297c65e522b324b58b402a3d0f967f91b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.227808 kubelet[2649]: E0527 02:53:32.227710 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0c39ce7f9fc745edbdb866502b6676297c65e522b324b58b402a3d0f967f91b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-757df98665-s4dh7" May 27 02:53:32.227808 kubelet[2649]: E0527 02:53:32.227733 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0c39ce7f9fc745edbdb866502b6676297c65e522b324b58b402a3d0f967f91b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-757df98665-s4dh7" May 27 02:53:32.227908 kubelet[2649]: E0527 02:53:32.227770 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-757df98665-s4dh7_calico-system(3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-757df98665-s4dh7_calico-system(3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"d0c39ce7f9fc745edbdb866502b6676297c65e522b324b58b402a3d0f967f91b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-757df98665-s4dh7" podUID="3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215" May 27 02:53:32.229438 containerd[1526]: time="2025-05-27T02:53:32.229391327Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j75ql,Uid:1cf6eb9f-a9dc-4c12-98a3-c40432c7841b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78b8672129284a45d794c7a4da4b3f64a8aea0fc92575e4bdc78dcb436aaba66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.229967 kubelet[2649]: E0527 02:53:32.229889 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78b8672129284a45d794c7a4da4b3f64a8aea0fc92575e4bdc78dcb436aaba66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.230039 kubelet[2649]: E0527 02:53:32.229978 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78b8672129284a45d794c7a4da4b3f64a8aea0fc92575e4bdc78dcb436aaba66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j75ql" May 27 02:53:32.230039 kubelet[2649]: E0527 02:53:32.230014 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78b8672129284a45d794c7a4da4b3f64a8aea0fc92575e4bdc78dcb436aaba66\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j75ql" May 27 02:53:32.230085 kubelet[2649]: E0527 02:53:32.230051 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j75ql_kube-system(1cf6eb9f-a9dc-4c12-98a3-c40432c7841b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j75ql_kube-system(1cf6eb9f-a9dc-4c12-98a3-c40432c7841b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78b8672129284a45d794c7a4da4b3f64a8aea0fc92575e4bdc78dcb436aaba66\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j75ql" podUID="1cf6eb9f-a9dc-4c12-98a3-c40432c7841b" May 27 02:53:32.232193 containerd[1526]: time="2025-05-27T02:53:32.232152509Z" level=error msg="Failed to destroy network for sandbox \"d50b9dfb1d425f7f6bd284e01d75a53a5c384b64f118ba51e1c38506034a66d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.233189 containerd[1526]: time="2025-05-27T02:53:32.233147938Z" level=error msg="Failed to destroy network for sandbox \"3d6440b8e87cda5f69b6a61e87aeeca75aa8b977ad4a7cba0fd4c3ebaaba56ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.234622 containerd[1526]: 
time="2025-05-27T02:53:32.234576575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jq8vl,Uid:3e08f79c-1eaa-47ef-a241-1357b91487af,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d50b9dfb1d425f7f6bd284e01d75a53a5c384b64f118ba51e1c38506034a66d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.235143 kubelet[2649]: E0527 02:53:32.234914 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d50b9dfb1d425f7f6bd284e01d75a53a5c384b64f118ba51e1c38506034a66d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.235143 kubelet[2649]: E0527 02:53:32.234980 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d50b9dfb1d425f7f6bd284e01d75a53a5c384b64f118ba51e1c38506034a66d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jq8vl" May 27 02:53:32.235143 kubelet[2649]: E0527 02:53:32.234996 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d50b9dfb1d425f7f6bd284e01d75a53a5c384b64f118ba51e1c38506034a66d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jq8vl" May 27 02:53:32.235242 kubelet[2649]: 
E0527 02:53:32.235027 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jq8vl_calico-system(3e08f79c-1eaa-47ef-a241-1357b91487af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jq8vl_calico-system(3e08f79c-1eaa-47ef-a241-1357b91487af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d50b9dfb1d425f7f6bd284e01d75a53a5c384b64f118ba51e1c38506034a66d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jq8vl" podUID="3e08f79c-1eaa-47ef-a241-1357b91487af" May 27 02:53:32.235287 containerd[1526]: time="2025-05-27T02:53:32.235162359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nwmms,Uid:db1824d8-31c5-478d-b9fc-7ea21fbef306,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d6440b8e87cda5f69b6a61e87aeeca75aa8b977ad4a7cba0fd4c3ebaaba56ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.235470 kubelet[2649]: E0527 02:53:32.235435 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d6440b8e87cda5f69b6a61e87aeeca75aa8b977ad4a7cba0fd4c3ebaaba56ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.235507 kubelet[2649]: E0527 02:53:32.235480 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3d6440b8e87cda5f69b6a61e87aeeca75aa8b977ad4a7cba0fd4c3ebaaba56ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nwmms" May 27 02:53:32.235535 kubelet[2649]: E0527 02:53:32.235505 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d6440b8e87cda5f69b6a61e87aeeca75aa8b977ad4a7cba0fd4c3ebaaba56ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-nwmms" May 27 02:53:32.235561 kubelet[2649]: E0527 02:53:32.235539 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-nwmms_kube-system(db1824d8-31c5-478d-b9fc-7ea21fbef306)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-nwmms_kube-system(db1824d8-31c5-478d-b9fc-7ea21fbef306)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d6440b8e87cda5f69b6a61e87aeeca75aa8b977ad4a7cba0fd4c3ebaaba56ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-nwmms" podUID="db1824d8-31c5-478d-b9fc-7ea21fbef306" May 27 02:53:32.238449 containerd[1526]: time="2025-05-27T02:53:32.238416475Z" level=error msg="Failed to destroy network for sandbox \"5824952bd82d0c557e76584665f9ef7f507704cdf491e2e78da6a4ac2115d078\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.239420 
containerd[1526]: time="2025-05-27T02:53:32.239282570Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-nqjnl,Uid:88e5918f-832b-4950-9cef-4b797dda53c9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5824952bd82d0c557e76584665f9ef7f507704cdf491e2e78da6a4ac2115d078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.239836 kubelet[2649]: E0527 02:53:32.239573 2649 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5824952bd82d0c557e76584665f9ef7f507704cdf491e2e78da6a4ac2115d078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 27 02:53:32.239836 kubelet[2649]: E0527 02:53:32.239608 2649 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5824952bd82d0c557e76584665f9ef7f507704cdf491e2e78da6a4ac2115d078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-nqjnl" May 27 02:53:32.239836 kubelet[2649]: E0527 02:53:32.239623 2649 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5824952bd82d0c557e76584665f9ef7f507704cdf491e2e78da6a4ac2115d078\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-78d55f7ddc-nqjnl" May 27 
02:53:32.239916 kubelet[2649]: E0527 02:53:32.239652 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-78d55f7ddc-nqjnl_calico-system(88e5918f-832b-4950-9cef-4b797dda53c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-78d55f7ddc-nqjnl_calico-system(88e5918f-832b-4950-9cef-4b797dda53c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5824952bd82d0c557e76584665f9ef7f507704cdf491e2e78da6a4ac2115d078\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-78d55f7ddc-nqjnl" podUID="88e5918f-832b-4950-9cef-4b797dda53c9" May 27 02:53:32.646679 systemd[1]: run-netns-cni\x2dfafbf25e\x2d59c3\x2d5c5b\x2de34d\x2d6bd7c0195c88.mount: Deactivated successfully. May 27 02:53:35.557450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992061319.mount: Deactivated successfully. 
May 27 02:53:35.832167 containerd[1526]: time="2025-05-27T02:53:35.832045548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:35.832663 containerd[1526]: time="2025-05-27T02:53:35.832614764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=150465379" May 27 02:53:35.834446 containerd[1526]: time="2025-05-27T02:53:35.834415861Z" level=info msg="ImageCreate event name:\"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:35.836184 containerd[1526]: time="2025-05-27T02:53:35.836157511Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:35.836788 containerd[1526]: time="2025-05-27T02:53:35.836629837Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"150465241\" in 4.086651173s" May 27 02:53:35.836788 containerd[1526]: time="2025-05-27T02:53:35.836662720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:f7148fde8e28b27da58f84cac134cdc53b5df321cda13c660192f06839670732\"" May 27 02:53:35.866157 containerd[1526]: time="2025-05-27T02:53:35.866095601Z" level=info msg="CreateContainer within sandbox \"bf1873b8fa060c83ead89e3b458aac7cf9a807a0c2c75792f537ded4c805e4d1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" May 27 02:53:35.872335 containerd[1526]: time="2025-05-27T02:53:35.872184117Z" level=info msg="Container 
3da46409b40208ba48974be6badb9ab78fd51f411c1d82bfafe91ca15ca4cba8: CDI devices from CRI Config.CDIDevices: []" May 27 02:53:35.889658 containerd[1526]: time="2025-05-27T02:53:35.889605863Z" level=info msg="CreateContainer within sandbox \"bf1873b8fa060c83ead89e3b458aac7cf9a807a0c2c75792f537ded4c805e4d1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"3da46409b40208ba48974be6badb9ab78fd51f411c1d82bfafe91ca15ca4cba8\"" May 27 02:53:35.890846 containerd[1526]: time="2025-05-27T02:53:35.890795139Z" level=info msg="StartContainer for \"3da46409b40208ba48974be6badb9ab78fd51f411c1d82bfafe91ca15ca4cba8\"" May 27 02:53:35.892544 containerd[1526]: time="2025-05-27T02:53:35.892514027Z" level=info msg="connecting to shim 3da46409b40208ba48974be6badb9ab78fd51f411c1d82bfafe91ca15ca4cba8" address="unix:///run/containerd/s/86e6334c6101e5e546fb31c1babd88dd7c3936d0ee94e0ecfe059b6b4ec75ba1" protocol=ttrpc version=3 May 27 02:53:35.918589 systemd[1]: Started cri-containerd-3da46409b40208ba48974be6badb9ab78fd51f411c1d82bfafe91ca15ca4cba8.scope - libcontainer container 3da46409b40208ba48974be6badb9ab78fd51f411c1d82bfafe91ca15ca4cba8. May 27 02:53:35.983964 containerd[1526]: time="2025-05-27T02:53:35.983902813Z" level=info msg="StartContainer for \"3da46409b40208ba48974be6badb9ab78fd51f411c1d82bfafe91ca15ca4cba8\" returns successfully" May 27 02:53:36.281805 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. May 27 02:53:36.281956 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
May 27 02:53:36.459566 kubelet[2649]: I0527 02:53:36.459474 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w2cs\" (UniqueName: \"kubernetes.io/projected/3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215-kube-api-access-9w2cs\") pod \"3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215\" (UID: \"3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215\") " May 27 02:53:36.462686 kubelet[2649]: I0527 02:53:36.460813 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215-whisker-backend-key-pair\") pod \"3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215\" (UID: \"3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215\") " May 27 02:53:36.462686 kubelet[2649]: I0527 02:53:36.460867 2649 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215-whisker-ca-bundle\") pod \"3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215\" (UID: \"3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215\") " May 27 02:53:36.463279 kubelet[2649]: I0527 02:53:36.463222 2649 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215" (UID: "3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 27 02:53:36.466279 kubelet[2649]: I0527 02:53:36.466238 2649 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215-kube-api-access-9w2cs" (OuterVolumeSpecName: "kube-api-access-9w2cs") pod "3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215" (UID: "3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215"). InnerVolumeSpecName "kube-api-access-9w2cs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 27 02:53:36.472857 kubelet[2649]: I0527 02:53:36.472806 2649 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215" (UID: "3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 27 02:53:36.558106 systemd[1]: var-lib-kubelet-pods-3a13e2c8\x2d3ee1\x2d4d3c\x2d8d96\x2d1e0ff5c9e215-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9w2cs.mount: Deactivated successfully. May 27 02:53:36.558206 systemd[1]: var-lib-kubelet-pods-3a13e2c8\x2d3ee1\x2d4d3c\x2d8d96\x2d1e0ff5c9e215-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. May 27 02:53:36.561214 kubelet[2649]: I0527 02:53:36.561179 2649 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" May 27 02:53:36.561214 kubelet[2649]: I0527 02:53:36.561213 2649 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9w2cs\" (UniqueName: \"kubernetes.io/projected/3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215-kube-api-access-9w2cs\") on node \"localhost\" DevicePath \"\"" May 27 02:53:36.561339 kubelet[2649]: I0527 02:53:36.561222 2649 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" May 27 02:53:36.649289 systemd[1]: Removed slice kubepods-besteffort-pod3a13e2c8_3ee1_4d3c_8d96_1e0ff5c9e215.slice - libcontainer container kubepods-besteffort-pod3a13e2c8_3ee1_4d3c_8d96_1e0ff5c9e215.slice. 
May 27 02:53:36.778971 kubelet[2649]: I0527 02:53:36.778890 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zm44g" podStartSLOduration=1.7447003319999999 podStartE2EDuration="13.778861601s" podCreationTimestamp="2025-05-27 02:53:23 +0000 UTC" firstStartedPulling="2025-05-27 02:53:23.803353975 +0000 UTC m=+19.244498182" lastFinishedPulling="2025-05-27 02:53:35.837515244 +0000 UTC m=+31.278659451" observedRunningTime="2025-05-27 02:53:36.778281306 +0000 UTC m=+32.219425513" watchObservedRunningTime="2025-05-27 02:53:36.778861601 +0000 UTC m=+32.220005808" May 27 02:53:36.839937 systemd[1]: Created slice kubepods-besteffort-podf1690d54_3bd7_4103_a321_b2466ca801ef.slice - libcontainer container kubepods-besteffort-podf1690d54_3bd7_4103_a321_b2466ca801ef.slice. May 27 02:53:36.868601 kubelet[2649]: I0527 02:53:36.868537 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1690d54-3bd7-4103-a321-b2466ca801ef-whisker-backend-key-pair\") pod \"whisker-78d6dbd985-lqqm7\" (UID: \"f1690d54-3bd7-4103-a321-b2466ca801ef\") " pod="calico-system/whisker-78d6dbd985-lqqm7" May 27 02:53:36.868601 kubelet[2649]: I0527 02:53:36.868609 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1690d54-3bd7-4103-a321-b2466ca801ef-whisker-ca-bundle\") pod \"whisker-78d6dbd985-lqqm7\" (UID: \"f1690d54-3bd7-4103-a321-b2466ca801ef\") " pod="calico-system/whisker-78d6dbd985-lqqm7" May 27 02:53:36.868769 kubelet[2649]: I0527 02:53:36.868645 2649 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk7cw\" (UniqueName: \"kubernetes.io/projected/f1690d54-3bd7-4103-a321-b2466ca801ef-kube-api-access-mk7cw\") pod \"whisker-78d6dbd985-lqqm7\" (UID: 
\"f1690d54-3bd7-4103-a321-b2466ca801ef\") " pod="calico-system/whisker-78d6dbd985-lqqm7" May 27 02:53:36.924353 containerd[1526]: time="2025-05-27T02:53:36.924278184Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3da46409b40208ba48974be6badb9ab78fd51f411c1d82bfafe91ca15ca4cba8\" id:\"5b07b11b5e302d228b5050705e4d49fe46d5201210dbd1f27f4e1ab1ae06da56\" pid:3799 exit_status:1 exited_at:{seconds:1748314416 nanos:923912550}" May 27 02:53:37.146386 containerd[1526]: time="2025-05-27T02:53:37.146075648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78d6dbd985-lqqm7,Uid:f1690d54-3bd7-4103-a321-b2466ca801ef,Namespace:calico-system,Attempt:0,}" May 27 02:53:37.412868 systemd-networkd[1439]: calif03f6f985d3: Link UP May 27 02:53:37.413723 systemd-networkd[1439]: calif03f6f985d3: Gained carrier May 27 02:53:37.427666 containerd[1526]: 2025-05-27 02:53:37.235 [INFO][3813] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist May 27 02:53:37.427666 containerd[1526]: 2025-05-27 02:53:37.297 [INFO][3813] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--78d6dbd985--lqqm7-eth0 whisker-78d6dbd985- calico-system f1690d54-3bd7-4103-a321-b2466ca801ef 850 0 2025-05-27 02:53:36 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:78d6dbd985 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-78d6dbd985-lqqm7 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif03f6f985d3 [] [] }} ContainerID="72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" Namespace="calico-system" Pod="whisker-78d6dbd985-lqqm7" WorkloadEndpoint="localhost-k8s-whisker--78d6dbd985--lqqm7-" May 27 02:53:37.427666 containerd[1526]: 2025-05-27 02:53:37.297 [INFO][3813] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" Namespace="calico-system" Pod="whisker-78d6dbd985-lqqm7" WorkloadEndpoint="localhost-k8s-whisker--78d6dbd985--lqqm7-eth0" May 27 02:53:37.427666 containerd[1526]: 2025-05-27 02:53:37.365 [INFO][3827] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" HandleID="k8s-pod-network.72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" Workload="localhost-k8s-whisker--78d6dbd985--lqqm7-eth0" May 27 02:53:37.427869 containerd[1526]: 2025-05-27 02:53:37.365 [INFO][3827] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" HandleID="k8s-pod-network.72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" Workload="localhost-k8s-whisker--78d6dbd985--lqqm7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c3b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-78d6dbd985-lqqm7", "timestamp":"2025-05-27 02:53:37.365264669 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 02:53:37.427869 containerd[1526]: 2025-05-27 02:53:37.365 [INFO][3827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:53:37.427869 containerd[1526]: 2025-05-27 02:53:37.365 [INFO][3827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 02:53:37.427869 containerd[1526]: 2025-05-27 02:53:37.365 [INFO][3827] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 02:53:37.427869 containerd[1526]: 2025-05-27 02:53:37.375 [INFO][3827] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" host="localhost" May 27 02:53:37.427869 containerd[1526]: 2025-05-27 02:53:37.380 [INFO][3827] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 02:53:37.427869 containerd[1526]: 2025-05-27 02:53:37.385 [INFO][3827] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 02:53:37.427869 containerd[1526]: 2025-05-27 02:53:37.386 [INFO][3827] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 02:53:37.427869 containerd[1526]: 2025-05-27 02:53:37.388 [INFO][3827] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 02:53:37.427869 containerd[1526]: 2025-05-27 02:53:37.388 [INFO][3827] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" host="localhost" May 27 02:53:37.428094 containerd[1526]: 2025-05-27 02:53:37.390 [INFO][3827] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd May 27 02:53:37.428094 containerd[1526]: 2025-05-27 02:53:37.393 [INFO][3827] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" host="localhost" May 27 02:53:37.428094 containerd[1526]: 2025-05-27 02:53:37.398 [INFO][3827] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" host="localhost" May 27 02:53:37.428094 containerd[1526]: 2025-05-27 02:53:37.398 [INFO][3827] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" host="localhost" May 27 02:53:37.428094 containerd[1526]: 2025-05-27 02:53:37.398 [INFO][3827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 02:53:37.428094 containerd[1526]: 2025-05-27 02:53:37.398 [INFO][3827] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" HandleID="k8s-pod-network.72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" Workload="localhost-k8s-whisker--78d6dbd985--lqqm7-eth0" May 27 02:53:37.428205 containerd[1526]: 2025-05-27 02:53:37.401 [INFO][3813] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" Namespace="calico-system" Pod="whisker-78d6dbd985-lqqm7" WorkloadEndpoint="localhost-k8s-whisker--78d6dbd985--lqqm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--78d6dbd985--lqqm7-eth0", GenerateName:"whisker-78d6dbd985-", Namespace:"calico-system", SelfLink:"", UID:"f1690d54-3bd7-4103-a321-b2466ca801ef", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78d6dbd985", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-78d6dbd985-lqqm7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif03f6f985d3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:37.428205 containerd[1526]: 2025-05-27 02:53:37.401 [INFO][3813] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" Namespace="calico-system" Pod="whisker-78d6dbd985-lqqm7" WorkloadEndpoint="localhost-k8s-whisker--78d6dbd985--lqqm7-eth0" May 27 02:53:37.428272 containerd[1526]: 2025-05-27 02:53:37.401 [INFO][3813] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif03f6f985d3 ContainerID="72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" Namespace="calico-system" Pod="whisker-78d6dbd985-lqqm7" WorkloadEndpoint="localhost-k8s-whisker--78d6dbd985--lqqm7-eth0" May 27 02:53:37.428272 containerd[1526]: 2025-05-27 02:53:37.413 [INFO][3813] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" Namespace="calico-system" Pod="whisker-78d6dbd985-lqqm7" WorkloadEndpoint="localhost-k8s-whisker--78d6dbd985--lqqm7-eth0" May 27 02:53:37.428327 containerd[1526]: 2025-05-27 02:53:37.414 [INFO][3813] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" Namespace="calico-system" Pod="whisker-78d6dbd985-lqqm7" 
WorkloadEndpoint="localhost-k8s-whisker--78d6dbd985--lqqm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--78d6dbd985--lqqm7-eth0", GenerateName:"whisker-78d6dbd985-", Namespace:"calico-system", SelfLink:"", UID:"f1690d54-3bd7-4103-a321-b2466ca801ef", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"78d6dbd985", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd", Pod:"whisker-78d6dbd985-lqqm7", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif03f6f985d3", MAC:"c6:71:b5:c5:8e:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:37.428401 containerd[1526]: 2025-05-27 02:53:37.425 [INFO][3813] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" Namespace="calico-system" Pod="whisker-78d6dbd985-lqqm7" WorkloadEndpoint="localhost-k8s-whisker--78d6dbd985--lqqm7-eth0" May 27 02:53:37.495820 containerd[1526]: time="2025-05-27T02:53:37.495772550Z" level=info msg="connecting to shim 
72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd" address="unix:///run/containerd/s/5aa4d8cd99bbcba4276a0b93af6d516e5860443bdcb35710ad14900b0b2314db" namespace=k8s.io protocol=ttrpc version=3 May 27 02:53:37.530489 systemd[1]: Started cri-containerd-72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd.scope - libcontainer container 72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd. May 27 02:53:37.542871 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 02:53:37.568455 containerd[1526]: time="2025-05-27T02:53:37.568407625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-78d6dbd985-lqqm7,Uid:f1690d54-3bd7-4103-a321-b2466ca801ef,Namespace:calico-system,Attempt:0,} returns sandbox id \"72ce447dc397a42b03572c8d36c73f283f8c6fca2fc5555b27620b3c3cd6bafd\"" May 27 02:53:37.569934 containerd[1526]: time="2025-05-27T02:53:37.569770270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 02:53:37.720548 containerd[1526]: time="2025-05-27T02:53:37.720417470Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:53:37.721261 containerd[1526]: time="2025-05-27T02:53:37.721218943Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:53:37.721342 containerd[1526]: 
time="2025-05-27T02:53:37.721320233Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 02:53:37.721579 kubelet[2649]: E0527 02:53:37.721520 2649 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 02:53:37.721902 kubelet[2649]: E0527 02:53:37.721581 2649 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 02:53:37.722105 kubelet[2649]: E0527 02:53:37.721924 2649 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8d00a686b45b4d5f8d5d971f47327b2d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mk7cw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78d6dbd985-lqqm7_calico-system(f1690d54-3bd7-4103-a321-b2466ca801ef): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:53:37.723792 containerd[1526]: 
time="2025-05-27T02:53:37.723754815Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 02:53:37.871345 containerd[1526]: time="2025-05-27T02:53:37.871276370Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:53:37.876097 containerd[1526]: time="2025-05-27T02:53:37.876047606Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:53:37.876378 containerd[1526]: time="2025-05-27T02:53:37.876134614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 02:53:37.876536 kubelet[2649]: E0527 02:53:37.876497 2649 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 02:53:37.876690 kubelet[2649]: E0527 02:53:37.876546 2649 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 02:53:37.876735 kubelet[2649]: E0527 02:53:37.876655 2649 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mk7cw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78d6dbd985-lqqm7_calico-system(f1690d54-3bd7-4103-a321-b2466ca801ef): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:53:37.877887 kubelet[2649]: E0527 02:53:37.877834 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-78d6dbd985-lqqm7" podUID="f1690d54-3bd7-4103-a321-b2466ca801ef" May 27 02:53:38.002672 containerd[1526]: 
time="2025-05-27T02:53:38.002615163Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3da46409b40208ba48974be6badb9ab78fd51f411c1d82bfafe91ca15ca4cba8\" id:\"afa153eb2386c500c63cb3c8eb8d0364e40eaf70ae276769f5d4cf3fe834609a\" pid:3960 exit_status:1 exited_at:{seconds:1748314418 nanos:2088077}" May 27 02:53:38.218493 systemd-networkd[1439]: vxlan.calico: Link UP May 27 02:53:38.218500 systemd-networkd[1439]: vxlan.calico: Gained carrier May 27 02:53:38.641251 kubelet[2649]: I0527 02:53:38.641209 2649 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215" path="/var/lib/kubelet/pods/3a13e2c8-3ee1-4d3c-8d96-1e0ff5c9e215/volumes" May 27 02:53:38.764105 kubelet[2649]: E0527 02:53:38.764060 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-78d6dbd985-lqqm7" podUID="f1690d54-3bd7-4103-a321-b2466ca801ef" May 27 
02:53:38.845518 containerd[1526]: time="2025-05-27T02:53:38.845446330Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3da46409b40208ba48974be6badb9ab78fd51f411c1d82bfafe91ca15ca4cba8\" id:\"d2e0cdfd744704cfc34a34e20e87d69e05c73b457c0467cee990ccfe6ff7f1b6\" pid:4128 exit_status:1 exited_at:{seconds:1748314418 nanos:845108300}" May 27 02:53:39.279566 systemd-networkd[1439]: vxlan.calico: Gained IPv6LL May 27 02:53:39.406457 systemd-networkd[1439]: calif03f6f985d3: Gained IPv6LL May 27 02:53:42.636581 containerd[1526]: time="2025-05-27T02:53:42.636518936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699dbd9b98-vpb9z,Uid:f1e80ce0-9104-4f5e-8461-05e2f0fe3a75,Namespace:calico-apiserver,Attempt:0,}" May 27 02:53:42.743260 systemd-networkd[1439]: cali00e9b082626: Link UP May 27 02:53:42.743852 systemd-networkd[1439]: cali00e9b082626: Gained carrier May 27 02:53:42.757109 containerd[1526]: 2025-05-27 02:53:42.674 [INFO][4149] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--699dbd9b98--vpb9z-eth0 calico-apiserver-699dbd9b98- calico-apiserver f1e80ce0-9104-4f5e-8461-05e2f0fe3a75 787 0 2025-05-27 02:53:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:699dbd9b98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-699dbd9b98-vpb9z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali00e9b082626 [] [] }} ContainerID="46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-vpb9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--vpb9z-" May 27 02:53:42.757109 containerd[1526]: 2025-05-27 02:53:42.674 [INFO][4149] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-vpb9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--vpb9z-eth0" May 27 02:53:42.757109 containerd[1526]: 2025-05-27 02:53:42.702 [INFO][4164] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" HandleID="k8s-pod-network.46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" Workload="localhost-k8s-calico--apiserver--699dbd9b98--vpb9z-eth0" May 27 02:53:42.757306 containerd[1526]: 2025-05-27 02:53:42.702 [INFO][4164] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" HandleID="k8s-pod-network.46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" Workload="localhost-k8s-calico--apiserver--699dbd9b98--vpb9z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001af400), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-699dbd9b98-vpb9z", "timestamp":"2025-05-27 02:53:42.702295723 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 02:53:42.757306 containerd[1526]: 2025-05-27 02:53:42.702 [INFO][4164] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:53:42.757306 containerd[1526]: 2025-05-27 02:53:42.702 [INFO][4164] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 02:53:42.757306 containerd[1526]: 2025-05-27 02:53:42.702 [INFO][4164] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 02:53:42.757306 containerd[1526]: 2025-05-27 02:53:42.713 [INFO][4164] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" host="localhost" May 27 02:53:42.757306 containerd[1526]: 2025-05-27 02:53:42.717 [INFO][4164] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 02:53:42.757306 containerd[1526]: 2025-05-27 02:53:42.722 [INFO][4164] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 02:53:42.757306 containerd[1526]: 2025-05-27 02:53:42.724 [INFO][4164] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 02:53:42.757306 containerd[1526]: 2025-05-27 02:53:42.726 [INFO][4164] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 02:53:42.757306 containerd[1526]: 2025-05-27 02:53:42.726 [INFO][4164] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" host="localhost" May 27 02:53:42.757526 containerd[1526]: 2025-05-27 02:53:42.728 [INFO][4164] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92 May 27 02:53:42.757526 containerd[1526]: 2025-05-27 02:53:42.731 [INFO][4164] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" host="localhost" May 27 02:53:42.757526 containerd[1526]: 2025-05-27 02:53:42.739 [INFO][4164] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" host="localhost" May 27 02:53:42.757526 containerd[1526]: 2025-05-27 02:53:42.739 [INFO][4164] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" host="localhost" May 27 02:53:42.757526 containerd[1526]: 2025-05-27 02:53:42.739 [INFO][4164] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 02:53:42.757526 containerd[1526]: 2025-05-27 02:53:42.739 [INFO][4164] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" HandleID="k8s-pod-network.46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" Workload="localhost-k8s-calico--apiserver--699dbd9b98--vpb9z-eth0" May 27 02:53:42.757745 containerd[1526]: 2025-05-27 02:53:42.741 [INFO][4149] cni-plugin/k8s.go 418: Populated endpoint ContainerID="46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-vpb9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--vpb9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--699dbd9b98--vpb9z-eth0", GenerateName:"calico-apiserver-699dbd9b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"f1e80ce0-9104-4f5e-8461-05e2f0fe3a75", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699dbd9b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-699dbd9b98-vpb9z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00e9b082626", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:42.757809 containerd[1526]: 2025-05-27 02:53:42.741 [INFO][4149] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-vpb9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--vpb9z-eth0" May 27 02:53:42.757809 containerd[1526]: 2025-05-27 02:53:42.741 [INFO][4149] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00e9b082626 ContainerID="46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-vpb9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--vpb9z-eth0" May 27 02:53:42.757809 containerd[1526]: 2025-05-27 02:53:42.744 [INFO][4149] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-vpb9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--vpb9z-eth0" May 27 02:53:42.758006 containerd[1526]: 2025-05-27 02:53:42.744 [INFO][4149] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-vpb9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--vpb9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--699dbd9b98--vpb9z-eth0", GenerateName:"calico-apiserver-699dbd9b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"f1e80ce0-9104-4f5e-8461-05e2f0fe3a75", ResourceVersion:"787", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699dbd9b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92", Pod:"calico-apiserver-699dbd9b98-vpb9z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali00e9b082626", MAC:"72:97:c4:b1:5d:1e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:42.758078 containerd[1526]: 2025-05-27 02:53:42.753 [INFO][4149] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-vpb9z" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--vpb9z-eth0" May 27 02:53:42.781120 containerd[1526]: time="2025-05-27T02:53:42.780880952Z" level=info msg="connecting to shim 46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92" address="unix:///run/containerd/s/6f590b7cd204b82c1df3f6b7c7432a4195aeaf5369212f8dc5ea6bae148a2d4a" namespace=k8s.io protocol=ttrpc version=3 May 27 02:53:42.803574 systemd[1]: Started cri-containerd-46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92.scope - libcontainer container 46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92. May 27 02:53:42.814179 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 02:53:42.834043 containerd[1526]: time="2025-05-27T02:53:42.833988948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699dbd9b98-vpb9z,Uid:f1e80ce0-9104-4f5e-8461-05e2f0fe3a75,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92\"" May 27 02:53:42.835558 containerd[1526]: time="2025-05-27T02:53:42.835529029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\"" May 27 02:53:43.635596 containerd[1526]: time="2025-05-27T02:53:43.635556950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-nqjnl,Uid:88e5918f-832b-4950-9cef-4b797dda53c9,Namespace:calico-system,Attempt:0,}" May 27 02:53:43.635787 containerd[1526]: time="2025-05-27T02:53:43.635569431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j75ql,Uid:1cf6eb9f-a9dc-4c12-98a3-c40432c7841b,Namespace:kube-system,Attempt:0,}" May 27 02:53:43.774929 systemd-networkd[1439]: calidc075cbe666: Link UP May 27 02:53:43.775087 
systemd-networkd[1439]: calidc075cbe666: Gained carrier May 27 02:53:43.795627 containerd[1526]: 2025-05-27 02:53:43.682 [INFO][4234] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--78d55f7ddc--nqjnl-eth0 goldmane-78d55f7ddc- calico-system 88e5918f-832b-4950-9cef-4b797dda53c9 782 0 2025-05-27 02:53:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:78d55f7ddc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-78d55f7ddc-nqjnl eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calidc075cbe666 [] [] }} ContainerID="247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" Namespace="calico-system" Pod="goldmane-78d55f7ddc-nqjnl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--nqjnl-" May 27 02:53:43.795627 containerd[1526]: 2025-05-27 02:53:43.682 [INFO][4234] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" Namespace="calico-system" Pod="goldmane-78d55f7ddc-nqjnl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--nqjnl-eth0" May 27 02:53:43.795627 containerd[1526]: 2025-05-27 02:53:43.718 [INFO][4262] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" HandleID="k8s-pod-network.247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" Workload="localhost-k8s-goldmane--78d55f7ddc--nqjnl-eth0" May 27 02:53:43.796044 containerd[1526]: 2025-05-27 02:53:43.718 [INFO][4262] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" HandleID="k8s-pod-network.247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" 
Workload="localhost-k8s-goldmane--78d55f7ddc--nqjnl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034f620), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-78d55f7ddc-nqjnl", "timestamp":"2025-05-27 02:53:43.718102032 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 02:53:43.796044 containerd[1526]: 2025-05-27 02:53:43.718 [INFO][4262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:53:43.796044 containerd[1526]: 2025-05-27 02:53:43.718 [INFO][4262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 02:53:43.796044 containerd[1526]: 2025-05-27 02:53:43.718 [INFO][4262] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 02:53:43.796044 containerd[1526]: 2025-05-27 02:53:43.731 [INFO][4262] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" host="localhost" May 27 02:53:43.796044 containerd[1526]: 2025-05-27 02:53:43.741 [INFO][4262] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 02:53:43.796044 containerd[1526]: 2025-05-27 02:53:43.745 [INFO][4262] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 02:53:43.796044 containerd[1526]: 2025-05-27 02:53:43.747 [INFO][4262] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 02:53:43.796044 containerd[1526]: 2025-05-27 02:53:43.750 [INFO][4262] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 02:53:43.796044 containerd[1526]: 2025-05-27 02:53:43.750 [INFO][4262] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" host="localhost" May 27 02:53:43.796295 containerd[1526]: 2025-05-27 02:53:43.752 [INFO][4262] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64 May 27 02:53:43.796295 containerd[1526]: 2025-05-27 02:53:43.757 [INFO][4262] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" host="localhost" May 27 02:53:43.796295 containerd[1526]: 2025-05-27 02:53:43.767 [INFO][4262] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" host="localhost" May 27 02:53:43.796295 containerd[1526]: 2025-05-27 02:53:43.767 [INFO][4262] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" host="localhost" May 27 02:53:43.796295 containerd[1526]: 2025-05-27 02:53:43.767 [INFO][4262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 02:53:43.796295 containerd[1526]: 2025-05-27 02:53:43.767 [INFO][4262] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" HandleID="k8s-pod-network.247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" Workload="localhost-k8s-goldmane--78d55f7ddc--nqjnl-eth0" May 27 02:53:43.796433 containerd[1526]: 2025-05-27 02:53:43.770 [INFO][4234] cni-plugin/k8s.go 418: Populated endpoint ContainerID="247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" Namespace="calico-system" Pod="goldmane-78d55f7ddc-nqjnl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--nqjnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--nqjnl-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"88e5918f-832b-4950-9cef-4b797dda53c9", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-78d55f7ddc-nqjnl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidc075cbe666", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:43.796433 containerd[1526]: 2025-05-27 02:53:43.770 [INFO][4234] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" Namespace="calico-system" Pod="goldmane-78d55f7ddc-nqjnl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--nqjnl-eth0" May 27 02:53:43.796510 containerd[1526]: 2025-05-27 02:53:43.770 [INFO][4234] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc075cbe666 ContainerID="247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" Namespace="calico-system" Pod="goldmane-78d55f7ddc-nqjnl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--nqjnl-eth0" May 27 02:53:43.796510 containerd[1526]: 2025-05-27 02:53:43.774 [INFO][4234] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" Namespace="calico-system" Pod="goldmane-78d55f7ddc-nqjnl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--nqjnl-eth0" May 27 02:53:43.796549 containerd[1526]: 2025-05-27 02:53:43.774 [INFO][4234] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" Namespace="calico-system" Pod="goldmane-78d55f7ddc-nqjnl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--nqjnl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--78d55f7ddc--nqjnl-eth0", GenerateName:"goldmane-78d55f7ddc-", Namespace:"calico-system", SelfLink:"", UID:"88e5918f-832b-4950-9cef-4b797dda53c9", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 22, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"78d55f7ddc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64", Pod:"goldmane-78d55f7ddc-nqjnl", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calidc075cbe666", MAC:"82:c0:86:83:98:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:43.796595 containerd[1526]: 2025-05-27 02:53:43.790 [INFO][4234] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" Namespace="calico-system" Pod="goldmane-78d55f7ddc-nqjnl" WorkloadEndpoint="localhost-k8s-goldmane--78d55f7ddc--nqjnl-eth0" May 27 02:53:43.821125 containerd[1526]: time="2025-05-27T02:53:43.821083709Z" level=info msg="connecting to shim 247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64" address="unix:///run/containerd/s/fac32afb3e9ecc0de7035e9ab261ffdbec7212b7107a9a5aa58f66c004b00de4" namespace=k8s.io protocol=ttrpc version=3 May 27 02:53:43.851543 systemd[1]: Started cri-containerd-247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64.scope - libcontainer container 247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64. 
May 27 02:53:43.866197 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 02:53:43.879476 systemd-networkd[1439]: calib6f66edf9fa: Link UP May 27 02:53:43.880455 systemd-networkd[1439]: calib6f66edf9fa: Gained carrier May 27 02:53:43.904540 containerd[1526]: 2025-05-27 02:53:43.697 [INFO][4251] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--j75ql-eth0 coredns-668d6bf9bc- kube-system 1cf6eb9f-a9dc-4c12-98a3-c40432c7841b 778 0 2025-05-27 02:53:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-j75ql eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib6f66edf9fa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j75ql" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j75ql-" May 27 02:53:43.904540 containerd[1526]: 2025-05-27 02:53:43.697 [INFO][4251] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j75ql" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j75ql-eth0" May 27 02:53:43.904540 containerd[1526]: 2025-05-27 02:53:43.737 [INFO][4268] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" HandleID="k8s-pod-network.5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" Workload="localhost-k8s-coredns--668d6bf9bc--j75ql-eth0" May 27 02:53:43.904716 containerd[1526]: 2025-05-27 02:53:43.737 [INFO][4268] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" HandleID="k8s-pod-network.5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" Workload="localhost-k8s-coredns--668d6bf9bc--j75ql-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002e14e0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-j75ql", "timestamp":"2025-05-27 02:53:43.737591475 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 02:53:43.904716 containerd[1526]: 2025-05-27 02:53:43.737 [INFO][4268] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:53:43.904716 containerd[1526]: 2025-05-27 02:53:43.767 [INFO][4268] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 02:53:43.904716 containerd[1526]: 2025-05-27 02:53:43.767 [INFO][4268] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 02:53:43.904716 containerd[1526]: 2025-05-27 02:53:43.831 [INFO][4268] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" host="localhost" May 27 02:53:43.904716 containerd[1526]: 2025-05-27 02:53:43.842 [INFO][4268] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 02:53:43.904716 containerd[1526]: 2025-05-27 02:53:43.849 [INFO][4268] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 02:53:43.904716 containerd[1526]: 2025-05-27 02:53:43.851 [INFO][4268] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 02:53:43.904716 containerd[1526]: 2025-05-27 02:53:43.853 [INFO][4268] ipam/ipam.go 235: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" May 27 02:53:43.904716 containerd[1526]: 2025-05-27 02:53:43.853 [INFO][4268] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" host="localhost" May 27 02:53:43.905456 containerd[1526]: 2025-05-27 02:53:43.855 [INFO][4268] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9 May 27 02:53:43.905456 containerd[1526]: 2025-05-27 02:53:43.861 [INFO][4268] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" host="localhost" May 27 02:53:43.905456 containerd[1526]: 2025-05-27 02:53:43.871 [INFO][4268] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" host="localhost" May 27 02:53:43.905456 containerd[1526]: 2025-05-27 02:53:43.871 [INFO][4268] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" host="localhost" May 27 02:53:43.905456 containerd[1526]: 2025-05-27 02:53:43.871 [INFO][4268] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
May 27 02:53:43.905456 containerd[1526]: 2025-05-27 02:53:43.871 [INFO][4268] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" HandleID="k8s-pod-network.5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" Workload="localhost-k8s-coredns--668d6bf9bc--j75ql-eth0" May 27 02:53:43.905582 containerd[1526]: 2025-05-27 02:53:43.873 [INFO][4251] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j75ql" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j75ql-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--j75ql-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1cf6eb9f-a9dc-4c12-98a3-c40432c7841b", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-j75ql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6f66edf9fa", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:43.905640 containerd[1526]: 2025-05-27 02:53:43.874 [INFO][4251] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j75ql" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j75ql-eth0" May 27 02:53:43.905640 containerd[1526]: 2025-05-27 02:53:43.874 [INFO][4251] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6f66edf9fa ContainerID="5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j75ql" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j75ql-eth0" May 27 02:53:43.905640 containerd[1526]: 2025-05-27 02:53:43.879 [INFO][4251] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j75ql" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j75ql-eth0" May 27 02:53:43.905700 containerd[1526]: 2025-05-27 02:53:43.879 [INFO][4251] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j75ql" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j75ql-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--j75ql-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1cf6eb9f-a9dc-4c12-98a3-c40432c7841b", ResourceVersion:"778", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9", Pod:"coredns-668d6bf9bc-j75ql", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6f66edf9fa", MAC:"a2:db:a5:3b:ac:8d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:43.905700 containerd[1526]: 2025-05-27 02:53:43.897 [INFO][4251] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" Namespace="kube-system" Pod="coredns-668d6bf9bc-j75ql" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--j75ql-eth0" May 27 02:53:43.907024 containerd[1526]: time="2025-05-27T02:53:43.906976766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-78d55f7ddc-nqjnl,Uid:88e5918f-832b-4950-9cef-4b797dda53c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"247bc01c1aa35d131abd4cd75b2fdc5a910dae8724ba8ad1a7226b1eede56b64\"" May 27 02:53:43.948913 containerd[1526]: time="2025-05-27T02:53:43.948864314Z" level=info msg="connecting to shim 5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9" address="unix:///run/containerd/s/9c552b2a262e86fac44cb047b87e77e858965597eaef9489665a127582e900bc" namespace=k8s.io protocol=ttrpc version=3 May 27 02:53:43.970479 systemd[1]: Started cri-containerd-5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9.scope - libcontainer container 5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9. 
May 27 02:53:43.982649 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 02:53:44.006034 containerd[1526]: time="2025-05-27T02:53:44.005991772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j75ql,Uid:1cf6eb9f-a9dc-4c12-98a3-c40432c7841b,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9\"" May 27 02:53:44.008863 containerd[1526]: time="2025-05-27T02:53:44.008829582Z" level=info msg="CreateContainer within sandbox \"5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 02:53:44.021376 containerd[1526]: time="2025-05-27T02:53:44.021336109Z" level=info msg="Container cebf68c071e7e4d922ce2fcd850f3c4e3f08fb2b52bbe304955c214b0521ed61: CDI devices from CRI Config.CDIDevices: []" May 27 02:53:44.027200 containerd[1526]: time="2025-05-27T02:53:44.027150500Z" level=info msg="CreateContainer within sandbox \"5f9a78759aa06ed282b8d0882709526a08782ea52bbb383c34498c7f0df400b9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cebf68c071e7e4d922ce2fcd850f3c4e3f08fb2b52bbe304955c214b0521ed61\"" May 27 02:53:44.027976 containerd[1526]: time="2025-05-27T02:53:44.027941158Z" level=info msg="StartContainer for \"cebf68c071e7e4d922ce2fcd850f3c4e3f08fb2b52bbe304955c214b0521ed61\"" May 27 02:53:44.029008 containerd[1526]: time="2025-05-27T02:53:44.028961954Z" level=info msg="connecting to shim cebf68c071e7e4d922ce2fcd850f3c4e3f08fb2b52bbe304955c214b0521ed61" address="unix:///run/containerd/s/9c552b2a262e86fac44cb047b87e77e858965597eaef9489665a127582e900bc" protocol=ttrpc version=3 May 27 02:53:44.048497 systemd[1]: Started cri-containerd-cebf68c071e7e4d922ce2fcd850f3c4e3f08fb2b52bbe304955c214b0521ed61.scope - libcontainer container cebf68c071e7e4d922ce2fcd850f3c4e3f08fb2b52bbe304955c214b0521ed61. 
May 27 02:53:44.133450 containerd[1526]: time="2025-05-27T02:53:44.131643562Z" level=info msg="StartContainer for \"cebf68c071e7e4d922ce2fcd850f3c4e3f08fb2b52bbe304955c214b0521ed61\" returns successfully" May 27 02:53:44.335234 systemd-networkd[1439]: cali00e9b082626: Gained IPv6LL May 27 02:53:44.628698 containerd[1526]: time="2025-05-27T02:53:44.628581820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:44.629664 containerd[1526]: time="2025-05-27T02:53:44.629637698Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.0: active requests=0, bytes read=44453213" May 27 02:53:44.635750 containerd[1526]: time="2025-05-27T02:53:44.635693426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jq8vl,Uid:3e08f79c-1eaa-47ef-a241-1357b91487af,Namespace:calico-system,Attempt:0,}" May 27 02:53:44.636006 containerd[1526]: time="2025-05-27T02:53:44.635975367Z" level=info msg="ImageCreate event name:\"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:44.636934 containerd[1526]: time="2025-05-27T02:53:44.636899076Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" with image id \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\", size \"45822470\" in 1.801333964s" May 27 02:53:44.636934 containerd[1526]: time="2025-05-27T02:53:44.636933038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.0\" returns image reference \"sha256:0d503660232383641bf9af3b7e4ef066c0e96a8ec586f123e5b56b6a196c983d\"" May 27 02:53:44.637670 containerd[1526]: time="2025-05-27T02:53:44.637645971Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ad7d2e76f15777636c5d91c108d7655659b38fe8970255050ffa51223eb96ff4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:44.638892 containerd[1526]: time="2025-05-27T02:53:44.638808617Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 02:53:44.645195 containerd[1526]: time="2025-05-27T02:53:44.645126445Z" level=info msg="CreateContainer within sandbox \"46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 02:53:44.655922 containerd[1526]: time="2025-05-27T02:53:44.655807917Z" level=info msg="Container dc8fe07d273347906230bf56f4f49a276b4e9ea5dbddff5803971fc1307d2583: CDI devices from CRI Config.CDIDevices: []" May 27 02:53:44.662324 containerd[1526]: time="2025-05-27T02:53:44.662273516Z" level=info msg="CreateContainer within sandbox \"46b569ba61d895a2bc6f8b6533a9599d59afac6248712283f90597f90eb49c92\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dc8fe07d273347906230bf56f4f49a276b4e9ea5dbddff5803971fc1307d2583\"" May 27 02:53:44.663707 containerd[1526]: time="2025-05-27T02:53:44.663672379Z" level=info msg="StartContainer for \"dc8fe07d273347906230bf56f4f49a276b4e9ea5dbddff5803971fc1307d2583\"" May 27 02:53:44.665923 containerd[1526]: time="2025-05-27T02:53:44.665883903Z" level=info msg="connecting to shim dc8fe07d273347906230bf56f4f49a276b4e9ea5dbddff5803971fc1307d2583" address="unix:///run/containerd/s/6f590b7cd204b82c1df3f6b7c7432a4195aeaf5369212f8dc5ea6bae148a2d4a" protocol=ttrpc version=3 May 27 02:53:44.714590 systemd[1]: Started cri-containerd-dc8fe07d273347906230bf56f4f49a276b4e9ea5dbddff5803971fc1307d2583.scope - libcontainer container dc8fe07d273347906230bf56f4f49a276b4e9ea5dbddff5803971fc1307d2583. 
May 27 02:53:44.767402 containerd[1526]: time="2025-05-27T02:53:44.767364502Z" level=info msg="StartContainer for \"dc8fe07d273347906230bf56f4f49a276b4e9ea5dbddff5803971fc1307d2583\" returns successfully" May 27 02:53:44.780812 systemd-networkd[1439]: caliee1cab4ed3e: Link UP May 27 02:53:44.783084 systemd-networkd[1439]: caliee1cab4ed3e: Gained carrier May 27 02:53:44.802333 kubelet[2649]: I0527 02:53:44.802079 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-699dbd9b98-vpb9z" podStartSLOduration=22.999510617 podStartE2EDuration="24.802060913s" podCreationTimestamp="2025-05-27 02:53:20 +0000 UTC" firstStartedPulling="2025-05-27 02:53:42.835278369 +0000 UTC m=+38.276422576" lastFinishedPulling="2025-05-27 02:53:44.637828665 +0000 UTC m=+40.078972872" observedRunningTime="2025-05-27 02:53:44.799789464 +0000 UTC m=+40.240933671" watchObservedRunningTime="2025-05-27 02:53:44.802060913 +0000 UTC m=+40.243205120" May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.683 [INFO][4436] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--jq8vl-eth0 csi-node-driver- calico-system 3e08f79c-1eaa-47ef-a241-1357b91487af 679 0 2025-05-27 02:53:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:78f6f74485 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-jq8vl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliee1cab4ed3e [] [] }} ContainerID="1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" Namespace="calico-system" Pod="csi-node-driver-jq8vl" WorkloadEndpoint="localhost-k8s-csi--node--driver--jq8vl-" May 27 02:53:44.807834 containerd[1526]: 2025-05-27 
02:53:44.683 [INFO][4436] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" Namespace="calico-system" Pod="csi-node-driver-jq8vl" WorkloadEndpoint="localhost-k8s-csi--node--driver--jq8vl-eth0" May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.721 [INFO][4454] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" HandleID="k8s-pod-network.1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" Workload="localhost-k8s-csi--node--driver--jq8vl-eth0" May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.721 [INFO][4454] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" HandleID="k8s-pod-network.1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" Workload="localhost-k8s-csi--node--driver--jq8vl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002a9070), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-jq8vl", "timestamp":"2025-05-27 02:53:44.721557468 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.721 [INFO][4454] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.722 [INFO][4454] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.722 [INFO][4454] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.734 [INFO][4454] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" host="localhost" May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.740 [INFO][4454] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.745 [INFO][4454] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.747 [INFO][4454] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.750 [INFO][4454] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.750 [INFO][4454] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" host="localhost" May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.753 [INFO][4454] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77 May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.759 [INFO][4454] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" host="localhost" May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.771 [INFO][4454] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" host="localhost" May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.771 [INFO][4454] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" host="localhost" May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.771 [INFO][4454] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 02:53:44.807834 containerd[1526]: 2025-05-27 02:53:44.772 [INFO][4454] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" HandleID="k8s-pod-network.1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" Workload="localhost-k8s-csi--node--driver--jq8vl-eth0" May 27 02:53:44.809115 containerd[1526]: 2025-05-27 02:53:44.776 [INFO][4436] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" Namespace="calico-system" Pod="csi-node-driver-jq8vl" WorkloadEndpoint="localhost-k8s-csi--node--driver--jq8vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jq8vl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e08f79c-1eaa-47ef-a241-1357b91487af", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-jq8vl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee1cab4ed3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:44.809115 containerd[1526]: 2025-05-27 02:53:44.776 [INFO][4436] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" Namespace="calico-system" Pod="csi-node-driver-jq8vl" WorkloadEndpoint="localhost-k8s-csi--node--driver--jq8vl-eth0" May 27 02:53:44.809115 containerd[1526]: 2025-05-27 02:53:44.776 [INFO][4436] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee1cab4ed3e ContainerID="1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" Namespace="calico-system" Pod="csi-node-driver-jq8vl" WorkloadEndpoint="localhost-k8s-csi--node--driver--jq8vl-eth0" May 27 02:53:44.809115 containerd[1526]: 2025-05-27 02:53:44.783 [INFO][4436] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" Namespace="calico-system" Pod="csi-node-driver-jq8vl" WorkloadEndpoint="localhost-k8s-csi--node--driver--jq8vl-eth0" May 27 02:53:44.809115 containerd[1526]: 2025-05-27 02:53:44.785 [INFO][4436] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" 
Namespace="calico-system" Pod="csi-node-driver-jq8vl" WorkloadEndpoint="localhost-k8s-csi--node--driver--jq8vl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--jq8vl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e08f79c-1eaa-47ef-a241-1357b91487af", ResourceVersion:"679", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"78f6f74485", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77", Pod:"csi-node-driver-jq8vl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliee1cab4ed3e", MAC:"2a:dc:9c:85:b7:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:44.809115 containerd[1526]: 2025-05-27 02:53:44.801 [INFO][4436] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" Namespace="calico-system" Pod="csi-node-driver-jq8vl" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--jq8vl-eth0" May 27 02:53:44.819507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2391680584.mount: Deactivated successfully. May 27 02:53:44.823442 kubelet[2649]: I0527 02:53:44.822909 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-j75ql" podStartSLOduration=33.822891176 podStartE2EDuration="33.822891176s" podCreationTimestamp="2025-05-27 02:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:53:44.822659719 +0000 UTC m=+40.263803926" watchObservedRunningTime="2025-05-27 02:53:44.822891176 +0000 UTC m=+40.264035343" May 27 02:53:44.842035 containerd[1526]: time="2025-05-27T02:53:44.841978630Z" level=info msg="connecting to shim 1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77" address="unix:///run/containerd/s/5d86697f7791268fe01a7490f69f099129d085ba92410e8b2170ba2eaee7f49a" namespace=k8s.io protocol=ttrpc version=3 May 27 02:53:44.843270 containerd[1526]: time="2025-05-27T02:53:44.843240564Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:53:44.845386 containerd[1526]: time="2025-05-27T02:53:44.844644348Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:53:44.845707 containerd[1526]: 
time="2025-05-27T02:53:44.845542294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 02:53:44.845964 kubelet[2649]: E0527 02:53:44.845921 2649 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 02:53:44.846046 kubelet[2649]: E0527 02:53:44.845975 2649 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 02:53:44.846195 kubelet[2649]: E0527 02:53:44.846109 2649 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xth2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-nqjnl_calico-system(88e5918f-832b-4950-9cef-4b797dda53c9): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:53:44.848657 kubelet[2649]: E0527 02:53:44.848369 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-nqjnl" podUID="88e5918f-832b-4950-9cef-4b797dda53c9" May 27 02:53:44.881541 systemd[1]: Started 
cri-containerd-1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77.scope - libcontainer container 1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77. May 27 02:53:44.901539 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 02:53:44.920641 containerd[1526]: time="2025-05-27T02:53:44.920584494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jq8vl,Uid:3e08f79c-1eaa-47ef-a241-1357b91487af,Namespace:calico-system,Attempt:0,} returns sandbox id \"1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77\"" May 27 02:53:44.923085 containerd[1526]: time="2025-05-27T02:53:44.923051437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 27 02:53:45.102963 systemd-networkd[1439]: calidc075cbe666: Gained IPv6LL May 27 02:53:45.166604 systemd-networkd[1439]: calib6f66edf9fa: Gained IPv6LL May 27 02:53:45.634998 containerd[1526]: time="2025-05-27T02:53:45.634956945Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b44f47cd-9h674,Uid:fa3f2bac-b19c-48be-9c4a-939e92dbf744,Namespace:calico-system,Attempt:0,}" May 27 02:53:45.752350 systemd-networkd[1439]: cali468b981a0bf: Link UP May 27 02:53:45.753294 systemd-networkd[1439]: cali468b981a0bf: Gained carrier May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.674 [INFO][4558] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--74b44f47cd--9h674-eth0 calico-kube-controllers-74b44f47cd- calico-system fa3f2bac-b19c-48be-9c4a-939e92dbf744 784 0 2025-05-27 02:53:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74b44f47cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 
localhost calico-kube-controllers-74b44f47cd-9h674 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali468b981a0bf [] [] }} ContainerID="246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" Namespace="calico-system" Pod="calico-kube-controllers-74b44f47cd-9h674" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b44f47cd--9h674-" May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.674 [INFO][4558] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" Namespace="calico-system" Pod="calico-kube-controllers-74b44f47cd-9h674" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b44f47cd--9h674-eth0" May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.708 [INFO][4571] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" HandleID="k8s-pod-network.246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" Workload="localhost-k8s-calico--kube--controllers--74b44f47cd--9h674-eth0" May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.708 [INFO][4571] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" HandleID="k8s-pod-network.246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" Workload="localhost-k8s-calico--kube--controllers--74b44f47cd--9h674-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-74b44f47cd-9h674", "timestamp":"2025-05-27 02:53:45.708545818 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.708 [INFO][4571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.708 [INFO][4571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.708 [INFO][4571] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.718 [INFO][4571] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" host="localhost" May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.723 [INFO][4571] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.728 [INFO][4571] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.729 [INFO][4571] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.732 [INFO][4571] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.732 [INFO][4571] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" host="localhost" May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.734 [INFO][4571] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.738 [INFO][4571] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" host="localhost" May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.746 [INFO][4571] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" host="localhost" May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.746 [INFO][4571] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" host="localhost" May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.746 [INFO][4571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 02:53:45.772608 containerd[1526]: 2025-05-27 02:53:45.746 [INFO][4571] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" HandleID="k8s-pod-network.246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" Workload="localhost-k8s-calico--kube--controllers--74b44f47cd--9h674-eth0" May 27 02:53:45.773146 containerd[1526]: 2025-05-27 02:53:45.748 [INFO][4558] cni-plugin/k8s.go 418: Populated endpoint ContainerID="246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" Namespace="calico-system" Pod="calico-kube-controllers-74b44f47cd-9h674" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b44f47cd--9h674-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74b44f47cd--9h674-eth0", GenerateName:"calico-kube-controllers-74b44f47cd-", Namespace:"calico-system", SelfLink:"", UID:"fa3f2bac-b19c-48be-9c4a-939e92dbf744", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 23, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b44f47cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-74b44f47cd-9h674", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali468b981a0bf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:45.773146 containerd[1526]: 2025-05-27 02:53:45.749 [INFO][4558] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" Namespace="calico-system" Pod="calico-kube-controllers-74b44f47cd-9h674" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b44f47cd--9h674-eth0" May 27 02:53:45.773146 containerd[1526]: 2025-05-27 02:53:45.749 [INFO][4558] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali468b981a0bf ContainerID="246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" Namespace="calico-system" Pod="calico-kube-controllers-74b44f47cd-9h674" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b44f47cd--9h674-eth0" May 27 02:53:45.773146 containerd[1526]: 2025-05-27 02:53:45.754 [INFO][4558] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" Namespace="calico-system" Pod="calico-kube-controllers-74b44f47cd-9h674" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b44f47cd--9h674-eth0" May 27 02:53:45.773146 containerd[1526]: 2025-05-27 02:53:45.754 [INFO][4558] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" Namespace="calico-system" Pod="calico-kube-controllers-74b44f47cd-9h674" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b44f47cd--9h674-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--74b44f47cd--9h674-eth0", GenerateName:"calico-kube-controllers-74b44f47cd-", Namespace:"calico-system", SelfLink:"", UID:"fa3f2bac-b19c-48be-9c4a-939e92dbf744", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74b44f47cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f", Pod:"calico-kube-controllers-74b44f47cd-9h674", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali468b981a0bf", MAC:"4e:6f:b8:9f:ec:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:45.773146 containerd[1526]: 2025-05-27 02:53:45.769 [INFO][4558] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" Namespace="calico-system" Pod="calico-kube-controllers-74b44f47cd-9h674" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--74b44f47cd--9h674-eth0" May 27 02:53:45.795380 kubelet[2649]: E0527 02:53:45.795337 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-nqjnl" podUID="88e5918f-832b-4950-9cef-4b797dda53c9" May 27 02:53:45.804815 kubelet[2649]: I0527 02:53:45.804706 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 02:53:45.805411 containerd[1526]: time="2025-05-27T02:53:45.805277322Z" level=info msg="connecting to shim 246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f" address="unix:///run/containerd/s/3a935c3f51ac3a5c9c62d147856dd2d6faf90361a40f4219077b83fd1de9bdb6" namespace=k8s.io protocol=ttrpc version=3 May 27 02:53:45.839524 systemd[1]: Started cri-containerd-246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f.scope - libcontainer container 
246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f. May 27 02:53:45.854265 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 02:53:45.875146 containerd[1526]: time="2025-05-27T02:53:45.875105484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74b44f47cd-9h674,Uid:fa3f2bac-b19c-48be-9c4a-939e92dbf744,Namespace:calico-system,Attempt:0,} returns sandbox id \"246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f\"" May 27 02:53:45.998630 systemd-networkd[1439]: caliee1cab4ed3e: Gained IPv6LL May 27 02:53:46.321847 containerd[1526]: time="2025-05-27T02:53:46.321193086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:46.321954 containerd[1526]: time="2025-05-27T02:53:46.321916737Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8226240" May 27 02:53:46.325677 containerd[1526]: time="2025-05-27T02:53:46.325643439Z" level=info msg="ImageCreate event name:\"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:46.332027 containerd[1526]: time="2025-05-27T02:53:46.331977486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"9595481\" in 1.408885806s" May 27 02:53:46.332027 containerd[1526]: time="2025-05-27T02:53:46.332020449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:ebe7e098653491dec9f15f87d7f5d33f47b09d1d6f3ef83deeaaa6237024c045\"" May 27 02:53:46.334416 
containerd[1526]: time="2025-05-27T02:53:46.333133087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\"" May 27 02:53:46.336571 containerd[1526]: time="2025-05-27T02:53:46.336533966Z" level=info msg="CreateContainer within sandbox \"1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 27 02:53:46.339693 containerd[1526]: time="2025-05-27T02:53:46.339124869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:46.345162 containerd[1526]: time="2025-05-27T02:53:46.344919557Z" level=info msg="Container 64704d65e1ac2fd64ef904b52a5bccfeaeabd142adffd9a58b6ab4e25995d8dd: CDI devices from CRI Config.CDIDevices: []" May 27 02:53:46.358013 containerd[1526]: time="2025-05-27T02:53:46.357851948Z" level=info msg="CreateContainer within sandbox \"1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"64704d65e1ac2fd64ef904b52a5bccfeaeabd142adffd9a58b6ab4e25995d8dd\"" May 27 02:53:46.358574 containerd[1526]: time="2025-05-27T02:53:46.358546797Z" level=info msg="StartContainer for \"64704d65e1ac2fd64ef904b52a5bccfeaeabd142adffd9a58b6ab4e25995d8dd\"" May 27 02:53:46.361692 containerd[1526]: time="2025-05-27T02:53:46.361554369Z" level=info msg="connecting to shim 64704d65e1ac2fd64ef904b52a5bccfeaeabd142adffd9a58b6ab4e25995d8dd" address="unix:///run/containerd/s/5d86697f7791268fe01a7490f69f099129d085ba92410e8b2170ba2eaee7f49a" protocol=ttrpc version=3 May 27 02:53:46.384481 systemd[1]: Started cri-containerd-64704d65e1ac2fd64ef904b52a5bccfeaeabd142adffd9a58b6ab4e25995d8dd.scope - libcontainer container 64704d65e1ac2fd64ef904b52a5bccfeaeabd142adffd9a58b6ab4e25995d8dd. 
May 27 02:53:46.427877 containerd[1526]: time="2025-05-27T02:53:46.424949714Z" level=info msg="StartContainer for \"64704d65e1ac2fd64ef904b52a5bccfeaeabd142adffd9a58b6ab4e25995d8dd\" returns successfully" May 27 02:53:46.635196 containerd[1526]: time="2025-05-27T02:53:46.634983467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nwmms,Uid:db1824d8-31c5-478d-b9fc-7ea21fbef306,Namespace:kube-system,Attempt:0,}" May 27 02:53:46.635438 containerd[1526]: time="2025-05-27T02:53:46.635353533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699dbd9b98-5tz6h,Uid:2f5769c9-834e-4b2c-8b1f-7f5c674d09af,Namespace:calico-apiserver,Attempt:0,}" May 27 02:53:46.934999 systemd[1]: Started sshd@7-10.0.0.73:22-10.0.0.1:36884.service - OpenSSH per-connection server daemon (10.0.0.1:36884). May 27 02:53:47.022578 systemd-networkd[1439]: cali468b981a0bf: Gained IPv6LL May 27 02:53:47.027537 sshd[4686]: Accepted publickey for core from 10.0.0.1 port 36884 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:53:47.030022 sshd-session[4686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:53:47.041860 systemd-logind[1506]: New session 8 of user core. May 27 02:53:47.047565 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 27 02:53:47.101515 systemd-networkd[1439]: cali753450a06fb: Link UP May 27 02:53:47.102204 systemd-networkd[1439]: cali753450a06fb: Gained carrier May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:46.995 [INFO][4675] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--699dbd9b98--5tz6h-eth0 calico-apiserver-699dbd9b98- calico-apiserver 2f5769c9-834e-4b2c-8b1f-7f5c674d09af 786 0 2025-05-27 02:53:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:699dbd9b98 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-699dbd9b98-5tz6h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali753450a06fb [] [] }} ContainerID="ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-5tz6h" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--5tz6h-" May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:46.995 [INFO][4675] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-5tz6h" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--5tz6h-eth0" May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.035 [INFO][4703] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" HandleID="k8s-pod-network.ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" Workload="localhost-k8s-calico--apiserver--699dbd9b98--5tz6h-eth0" May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.035 [INFO][4703] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" HandleID="k8s-pod-network.ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" Workload="localhost-k8s-calico--apiserver--699dbd9b98--5tz6h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b4590), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-699dbd9b98-5tz6h", "timestamp":"2025-05-27 02:53:47.035689192 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.035 [INFO][4703] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.035 [INFO][4703] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.035 [INFO][4703] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.053 [INFO][4703] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" host="localhost" May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.059 [INFO][4703] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.066 [INFO][4703] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.069 [INFO][4703] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.071 [INFO][4703] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.071 [INFO][4703] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" host="localhost" May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.073 [INFO][4703] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.079 [INFO][4703] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" host="localhost" May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.090 [INFO][4703] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" host="localhost" May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.090 [INFO][4703] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" host="localhost" May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.090 [INFO][4703] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 02:53:47.123617 containerd[1526]: 2025-05-27 02:53:47.090 [INFO][4703] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" HandleID="k8s-pod-network.ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" Workload="localhost-k8s-calico--apiserver--699dbd9b98--5tz6h-eth0" May 27 02:53:47.124299 containerd[1526]: 2025-05-27 02:53:47.094 [INFO][4675] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-5tz6h" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--5tz6h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--699dbd9b98--5tz6h-eth0", GenerateName:"calico-apiserver-699dbd9b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f5769c9-834e-4b2c-8b1f-7f5c674d09af", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699dbd9b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-699dbd9b98-5tz6h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali753450a06fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:47.124299 containerd[1526]: 2025-05-27 02:53:47.094 [INFO][4675] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-5tz6h" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--5tz6h-eth0" May 27 02:53:47.124299 containerd[1526]: 2025-05-27 02:53:47.094 [INFO][4675] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali753450a06fb ContainerID="ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-5tz6h" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--5tz6h-eth0" May 27 02:53:47.124299 containerd[1526]: 2025-05-27 02:53:47.104 [INFO][4675] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-5tz6h" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--5tz6h-eth0" May 27 02:53:47.124299 containerd[1526]: 2025-05-27 02:53:47.104 [INFO][4675] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-5tz6h" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--5tz6h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--699dbd9b98--5tz6h-eth0", GenerateName:"calico-apiserver-699dbd9b98-", Namespace:"calico-apiserver", SelfLink:"", UID:"2f5769c9-834e-4b2c-8b1f-7f5c674d09af", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"699dbd9b98", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e", Pod:"calico-apiserver-699dbd9b98-5tz6h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali753450a06fb", MAC:"6a:5d:e0:1f:2c:83", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:47.124299 containerd[1526]: 2025-05-27 02:53:47.118 [INFO][4675] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" Namespace="calico-apiserver" Pod="calico-apiserver-699dbd9b98-5tz6h" WorkloadEndpoint="localhost-k8s-calico--apiserver--699dbd9b98--5tz6h-eth0" May 27 02:53:47.152265 containerd[1526]: time="2025-05-27T02:53:47.152034713Z" level=info msg="connecting to shim ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e" address="unix:///run/containerd/s/6e4dd955886968214286b93f7243530d05cdda69b78e24732f69acab213e2929" namespace=k8s.io protocol=ttrpc version=3 May 27 02:53:47.189546 systemd[1]: Started cri-containerd-ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e.scope - libcontainer container ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e. May 27 02:53:47.204591 systemd-networkd[1439]: cali0a3edfca91b: Link UP May 27 02:53:47.208564 systemd-networkd[1439]: cali0a3edfca91b: Gained carrier May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.049 [INFO][4690] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--nwmms-eth0 coredns-668d6bf9bc- kube-system db1824d8-31c5-478d-b9fc-7ea21fbef306 785 0 2025-05-27 02:53:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-nwmms eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0a3edfca91b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwmms" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nwmms-" May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.049 [INFO][4690] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwmms" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nwmms-eth0" May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.081 [INFO][4715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" HandleID="k8s-pod-network.85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" Workload="localhost-k8s-coredns--668d6bf9bc--nwmms-eth0" May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.082 [INFO][4715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" HandleID="k8s-pod-network.85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" Workload="localhost-k8s-coredns--668d6bf9bc--nwmms-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dcf0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-nwmms", "timestamp":"2025-05-27 02:53:47.081866567 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.082 [INFO][4715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.090 [INFO][4715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.090 [INFO][4715] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.153 [INFO][4715] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" host="localhost" May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.161 [INFO][4715] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.170 [INFO][4715] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.173 [INFO][4715] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.176 [INFO][4715] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.176 [INFO][4715] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" host="localhost" May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.178 [INFO][4715] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.185 [INFO][4715] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" host="localhost" May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.193 [INFO][4715] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" host="localhost" May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.194 [INFO][4715] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" host="localhost" May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.194 [INFO][4715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 27 02:53:47.241647 containerd[1526]: 2025-05-27 02:53:47.194 [INFO][4715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" HandleID="k8s-pod-network.85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" Workload="localhost-k8s-coredns--668d6bf9bc--nwmms-eth0" May 27 02:53:47.242134 containerd[1526]: 2025-05-27 02:53:47.198 [INFO][4690] cni-plugin/k8s.go 418: Populated endpoint ContainerID="85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwmms" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nwmms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--nwmms-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"db1824d8-31c5-478d-b9fc-7ea21fbef306", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-nwmms", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0a3edfca91b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:47.242134 containerd[1526]: 2025-05-27 02:53:47.198 [INFO][4690] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwmms" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nwmms-eth0" May 27 02:53:47.242134 containerd[1526]: 2025-05-27 02:53:47.198 [INFO][4690] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a3edfca91b ContainerID="85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwmms" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nwmms-eth0" May 27 02:53:47.242134 containerd[1526]: 2025-05-27 02:53:47.207 [INFO][4690] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwmms" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nwmms-eth0" May 27 02:53:47.242134 containerd[1526]: 2025-05-27 02:53:47.208 [INFO][4690] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwmms" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nwmms-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--nwmms-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"db1824d8-31c5-478d-b9fc-7ea21fbef306", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.May, 27, 2, 53, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed", Pod:"coredns-668d6bf9bc-nwmms", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0a3edfca91b", MAC:"26:10:0e:96:6f:42", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 27 02:53:47.242134 containerd[1526]: 2025-05-27 02:53:47.225 [INFO][4690] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" Namespace="kube-system" Pod="coredns-668d6bf9bc-nwmms" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--nwmms-eth0" May 27 02:53:47.271825 containerd[1526]: time="2025-05-27T02:53:47.271775108Z" level=info msg="connecting to shim 85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed" address="unix:///run/containerd/s/59b9704d6fff3b00f922712403c3ef12c8f1d0cd60dda3610855c72257b2f30b" namespace=k8s.io protocol=ttrpc version=3 May 27 02:53:47.304766 systemd[1]: Started cri-containerd-85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed.scope - libcontainer container 85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed. 
May 27 02:53:47.312586 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 02:53:47.323264 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 02:53:47.377194 containerd[1526]: time="2025-05-27T02:53:47.377097031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-nwmms,Uid:db1824d8-31c5-478d-b9fc-7ea21fbef306,Namespace:kube-system,Attempt:0,} returns sandbox id \"85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed\"" May 27 02:53:47.380719 containerd[1526]: time="2025-05-27T02:53:47.380680117Z" level=info msg="CreateContainer within sandbox \"85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 02:53:47.400934 sshd[4713]: Connection closed by 10.0.0.1 port 36884 May 27 02:53:47.401275 containerd[1526]: time="2025-05-27T02:53:47.400954391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-699dbd9b98-5tz6h,Uid:2f5769c9-834e-4b2c-8b1f-7f5c674d09af,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e\"" May 27 02:53:47.401589 sshd-session[4686]: pam_unix(sshd:session): session closed for user core May 27 02:53:47.405627 containerd[1526]: time="2025-05-27T02:53:47.405588390Z" level=info msg="CreateContainer within sandbox \"ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" May 27 02:53:47.408928 systemd[1]: sshd@7-10.0.0.73:22-10.0.0.1:36884.service: Deactivated successfully. May 27 02:53:47.411075 systemd[1]: session-8.scope: Deactivated successfully. May 27 02:53:47.413111 systemd-logind[1506]: Session 8 logged out. Waiting for processes to exit. May 27 02:53:47.414366 systemd-logind[1506]: Removed session 8. 
May 27 02:53:47.433290 containerd[1526]: time="2025-05-27T02:53:47.433047519Z" level=info msg="Container ad5a2881bda7fdbd1b3d2bdb8b182e5e2473d445dad6df4db09e78aed8420291: CDI devices from CRI Config.CDIDevices: []" May 27 02:53:47.434414 containerd[1526]: time="2025-05-27T02:53:47.434369249Z" level=info msg="Container 0aee02fe35b1cb072fb41b03fb277f76e7e3a432b20293987288b2e87acda0e5: CDI devices from CRI Config.CDIDevices: []" May 27 02:53:47.443995 containerd[1526]: time="2025-05-27T02:53:47.443828660Z" level=info msg="CreateContainer within sandbox \"ea25c7601e438cf876dc5460b813b852050920f124dfb0ed7b85c2a303dba56e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ad5a2881bda7fdbd1b3d2bdb8b182e5e2473d445dad6df4db09e78aed8420291\"" May 27 02:53:47.444485 containerd[1526]: time="2025-05-27T02:53:47.444391619Z" level=info msg="CreateContainer within sandbox \"85fc9ce7fce7b36015152d321e10fdd843ebf0d44e9b117e2a67784385b69aed\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0aee02fe35b1cb072fb41b03fb277f76e7e3a432b20293987288b2e87acda0e5\"" May 27 02:53:47.445574 containerd[1526]: time="2025-05-27T02:53:47.445460012Z" level=info msg="StartContainer for \"0aee02fe35b1cb072fb41b03fb277f76e7e3a432b20293987288b2e87acda0e5\"" May 27 02:53:47.445574 containerd[1526]: time="2025-05-27T02:53:47.445502495Z" level=info msg="StartContainer for \"ad5a2881bda7fdbd1b3d2bdb8b182e5e2473d445dad6df4db09e78aed8420291\"" May 27 02:53:47.451892 containerd[1526]: time="2025-05-27T02:53:47.451832890Z" level=info msg="connecting to shim 0aee02fe35b1cb072fb41b03fb277f76e7e3a432b20293987288b2e87acda0e5" address="unix:///run/containerd/s/59b9704d6fff3b00f922712403c3ef12c8f1d0cd60dda3610855c72257b2f30b" protocol=ttrpc version=3 May 27 02:53:47.457829 containerd[1526]: time="2025-05-27T02:53:47.457008046Z" level=info msg="connecting to shim ad5a2881bda7fdbd1b3d2bdb8b182e5e2473d445dad6df4db09e78aed8420291" 
address="unix:///run/containerd/s/6e4dd955886968214286b93f7243530d05cdda69b78e24732f69acab213e2929" protocol=ttrpc version=3 May 27 02:53:47.490624 systemd[1]: Started cri-containerd-0aee02fe35b1cb072fb41b03fb277f76e7e3a432b20293987288b2e87acda0e5.scope - libcontainer container 0aee02fe35b1cb072fb41b03fb277f76e7e3a432b20293987288b2e87acda0e5. May 27 02:53:47.507568 systemd[1]: Started cri-containerd-ad5a2881bda7fdbd1b3d2bdb8b182e5e2473d445dad6df4db09e78aed8420291.scope - libcontainer container ad5a2881bda7fdbd1b3d2bdb8b182e5e2473d445dad6df4db09e78aed8420291. May 27 02:53:47.564468 containerd[1526]: time="2025-05-27T02:53:47.564425674Z" level=info msg="StartContainer for \"0aee02fe35b1cb072fb41b03fb277f76e7e3a432b20293987288b2e87acda0e5\" returns successfully" May 27 02:53:47.623240 containerd[1526]: time="2025-05-27T02:53:47.623195675Z" level=info msg="StartContainer for \"ad5a2881bda7fdbd1b3d2bdb8b182e5e2473d445dad6df4db09e78aed8420291\" returns successfully" May 27 02:53:47.820754 kubelet[2649]: I0527 02:53:47.820523 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-nwmms" podStartSLOduration=36.820492164 podStartE2EDuration="36.820492164s" podCreationTimestamp="2025-05-27 02:53:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:53:47.81897878 +0000 UTC m=+43.260123067" watchObservedRunningTime="2025-05-27 02:53:47.820492164 +0000 UTC m=+43.261636411" May 27 02:53:47.857380 kubelet[2649]: I0527 02:53:47.857154 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-699dbd9b98-5tz6h" podStartSLOduration=27.857135524 podStartE2EDuration="27.857135524s" podCreationTimestamp="2025-05-27 02:53:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 
02:53:47.856136935 +0000 UTC m=+43.297281142" watchObservedRunningTime="2025-05-27 02:53:47.857135524 +0000 UTC m=+43.298279731" May 27 02:53:48.521693 containerd[1526]: time="2025-05-27T02:53:48.521642093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:48.523320 containerd[1526]: time="2025-05-27T02:53:48.522352981Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.0: active requests=0, bytes read=48045219" May 27 02:53:48.523320 containerd[1526]: time="2025-05-27T02:53:48.523012145Z" level=info msg="ImageCreate event name:\"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:48.525623 containerd[1526]: time="2025-05-27T02:53:48.525591039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:48.526562 containerd[1526]: time="2025-05-27T02:53:48.526526622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" with image id \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:eb5bc5c9e7a71f1d8ea69bbcc8e54b84fb7ec1e32d919c8b148f80b770f20182\", size \"49414428\" in 2.193350492s" May 27 02:53:48.526610 containerd[1526]: time="2025-05-27T02:53:48.526570905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.0\" returns image reference \"sha256:4188fe2931435deda58a0dc1767a2f6ad2bb27e47662ccec626bd07006f56373\"" May 27 02:53:48.527522 containerd[1526]: time="2025-05-27T02:53:48.527404321Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 27 02:53:48.537115 containerd[1526]: time="2025-05-27T02:53:48.537068730Z" level=info msg="CreateContainer within sandbox \"246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" May 27 02:53:48.545786 containerd[1526]: time="2025-05-27T02:53:48.545736433Z" level=info msg="Container 55e91eb4f3b31b9e8d757ec63c46cb2c527e814d8224d39e073ad6bfb25dd1cd: CDI devices from CRI Config.CDIDevices: []" May 27 02:53:48.550851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1092602011.mount: Deactivated successfully. May 27 02:53:48.558604 systemd-networkd[1439]: cali0a3edfca91b: Gained IPv6LL May 27 02:53:48.566451 containerd[1526]: time="2025-05-27T02:53:48.566409222Z" level=info msg="CreateContainer within sandbox \"246a0bfa1fc0da158e7526e89cc0c74766fdc4c6b8ea1e567132e2e1245d864f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"55e91eb4f3b31b9e8d757ec63c46cb2c527e814d8224d39e073ad6bfb25dd1cd\"" May 27 02:53:48.566929 containerd[1526]: time="2025-05-27T02:53:48.566894255Z" level=info msg="StartContainer for \"55e91eb4f3b31b9e8d757ec63c46cb2c527e814d8224d39e073ad6bfb25dd1cd\"" May 27 02:53:48.568073 containerd[1526]: time="2025-05-27T02:53:48.568043532Z" level=info msg="connecting to shim 55e91eb4f3b31b9e8d757ec63c46cb2c527e814d8224d39e073ad6bfb25dd1cd" address="unix:///run/containerd/s/3a935c3f51ac3a5c9c62d147856dd2d6faf90361a40f4219077b83fd1de9bdb6" protocol=ttrpc version=3 May 27 02:53:48.596496 systemd[1]: Started cri-containerd-55e91eb4f3b31b9e8d757ec63c46cb2c527e814d8224d39e073ad6bfb25dd1cd.scope - libcontainer container 55e91eb4f3b31b9e8d757ec63c46cb2c527e814d8224d39e073ad6bfb25dd1cd. 
May 27 02:53:48.642844 containerd[1526]: time="2025-05-27T02:53:48.642800557Z" level=info msg="StartContainer for \"55e91eb4f3b31b9e8d757ec63c46cb2c527e814d8224d39e073ad6bfb25dd1cd\" returns successfully" May 27 02:53:48.751021 systemd-networkd[1439]: cali753450a06fb: Gained IPv6LL May 27 02:53:48.813295 kubelet[2649]: I0527 02:53:48.813189 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 02:53:48.843611 kubelet[2649]: I0527 02:53:48.843549 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-74b44f47cd-9h674" podStartSLOduration=23.193865362 podStartE2EDuration="25.843528689s" podCreationTimestamp="2025-05-27 02:53:23 +0000 UTC" firstStartedPulling="2025-05-27 02:53:45.87754294 +0000 UTC m=+41.318687147" lastFinishedPulling="2025-05-27 02:53:48.527206267 +0000 UTC m=+43.968350474" observedRunningTime="2025-05-27 02:53:48.843152184 +0000 UTC m=+44.284296431" watchObservedRunningTime="2025-05-27 02:53:48.843528689 +0000 UTC m=+44.284672896" May 27 02:53:49.054288 containerd[1526]: time="2025-05-27T02:53:49.054246935Z" level=info msg="TaskExit event in podsandbox handler container_id:\"55e91eb4f3b31b9e8d757ec63c46cb2c527e814d8224d39e073ad6bfb25dd1cd\" id:\"a81b924a3a2b48fd9f2f0b3a069381e467a3b755536a768adcadecfa375ed694\" pid:4992 exited_at:{seconds:1748314429 nanos:49111758}" May 27 02:53:49.111238 containerd[1526]: time="2025-05-27T02:53:49.111126516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"55e91eb4f3b31b9e8d757ec63c46cb2c527e814d8224d39e073ad6bfb25dd1cd\" id:\"fd7321ce15acadcb53ca24150bdb3caf625dcb3c00a37a8ac9ecdf7daaa29082\" pid:5015 exited_at:{seconds:1748314429 nanos:110912261}" May 27 02:53:49.670825 containerd[1526]: time="2025-05-27T02:53:49.670778917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:49.671602 
containerd[1526]: time="2025-05-27T02:53:49.671562048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=13749925" May 27 02:53:49.672425 containerd[1526]: time="2025-05-27T02:53:49.672378622Z" level=info msg="ImageCreate event name:\"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:49.674558 containerd[1526]: time="2025-05-27T02:53:49.674513882Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 27 02:53:49.675011 containerd[1526]: time="2025-05-27T02:53:49.674968152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size \"15119118\" in 1.14753011s" May 27 02:53:49.675052 containerd[1526]: time="2025-05-27T02:53:49.675010155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:a5d5f2a68204ed0dbc50f8778616ee92a63c0e342d178a4620e6271484e5c8b2\"" May 27 02:53:49.680213 containerd[1526]: time="2025-05-27T02:53:49.680184055Z" level=info msg="CreateContainer within sandbox \"1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 27 02:53:49.689118 containerd[1526]: time="2025-05-27T02:53:49.687699029Z" level=info msg="Container c87a515d710149ea816f66b080106dd4bff4a22ce756957c3462f404600afd3d: CDI devices from CRI Config.CDIDevices: []" May 27 
02:53:49.699391 containerd[1526]: time="2025-05-27T02:53:49.699356356Z" level=info msg="CreateContainer within sandbox \"1ed15fb747ff956a23aa08b92ca28f49d441c61942dd12f9a16b57fd8c929a77\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c87a515d710149ea816f66b080106dd4bff4a22ce756957c3462f404600afd3d\"" May 27 02:53:49.699903 containerd[1526]: time="2025-05-27T02:53:49.699872750Z" level=info msg="StartContainer for \"c87a515d710149ea816f66b080106dd4bff4a22ce756957c3462f404600afd3d\"" May 27 02:53:49.701449 containerd[1526]: time="2025-05-27T02:53:49.701421332Z" level=info msg="connecting to shim c87a515d710149ea816f66b080106dd4bff4a22ce756957c3462f404600afd3d" address="unix:///run/containerd/s/5d86697f7791268fe01a7490f69f099129d085ba92410e8b2170ba2eaee7f49a" protocol=ttrpc version=3 May 27 02:53:49.723465 systemd[1]: Started cri-containerd-c87a515d710149ea816f66b080106dd4bff4a22ce756957c3462f404600afd3d.scope - libcontainer container c87a515d710149ea816f66b080106dd4bff4a22ce756957c3462f404600afd3d. 
May 27 02:53:49.811932 containerd[1526]: time="2025-05-27T02:53:49.811894956Z" level=info msg="StartContainer for \"c87a515d710149ea816f66b080106dd4bff4a22ce756957c3462f404600afd3d\" returns successfully" May 27 02:53:49.889204 kubelet[2649]: I0527 02:53:49.889139 2649 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jq8vl" podStartSLOduration=22.134930879 podStartE2EDuration="26.889124234s" podCreationTimestamp="2025-05-27 02:53:23 +0000 UTC" firstStartedPulling="2025-05-27 02:53:44.921841067 +0000 UTC m=+40.362985274" lastFinishedPulling="2025-05-27 02:53:49.676034422 +0000 UTC m=+45.117178629" observedRunningTime="2025-05-27 02:53:49.888967584 +0000 UTC m=+45.330111751" watchObservedRunningTime="2025-05-27 02:53:49.889124234 +0000 UTC m=+45.330268441" May 27 02:53:50.709661 kubelet[2649]: I0527 02:53:50.709612 2649 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 27 02:53:50.709661 kubelet[2649]: I0527 02:53:50.709660 2649 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 27 02:53:52.413704 systemd[1]: Started sshd@8-10.0.0.73:22-10.0.0.1:36896.service - OpenSSH per-connection server daemon (10.0.0.1:36896). May 27 02:53:52.494573 sshd[5066]: Accepted publickey for core from 10.0.0.1 port 36896 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:53:52.495939 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:53:52.500371 systemd-logind[1506]: New session 9 of user core. May 27 02:53:52.508509 systemd[1]: Started session-9.scope - Session 9 of User core. 
May 27 02:53:52.688025 sshd[5068]: Connection closed by 10.0.0.1 port 36896 May 27 02:53:52.688548 sshd-session[5066]: pam_unix(sshd:session): session closed for user core May 27 02:53:52.692096 systemd[1]: sshd@8-10.0.0.73:22-10.0.0.1:36896.service: Deactivated successfully. May 27 02:53:52.693758 systemd[1]: session-9.scope: Deactivated successfully. May 27 02:53:52.694370 systemd-logind[1506]: Session 9 logged out. Waiting for processes to exit. May 27 02:53:52.695786 systemd-logind[1506]: Removed session 9. May 27 02:53:53.439952 kubelet[2649]: I0527 02:53:53.439856 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 02:53:53.636580 containerd[1526]: time="2025-05-27T02:53:53.636533301Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 02:53:53.817319 containerd[1526]: time="2025-05-27T02:53:53.817249485Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:53:53.818470 containerd[1526]: time="2025-05-27T02:53:53.818416836Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:53:53.818541 containerd[1526]: time="2025-05-27T02:53:53.818466679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 02:53:53.818657 kubelet[2649]: E0527 02:53:53.818615 2649 log.go:32] "PullImage from image service failed" err="rpc error: code 
= Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 02:53:53.818743 kubelet[2649]: E0527 02:53:53.818666 2649 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 02:53:53.818817 kubelet[2649]: E0527 02:53:53.818774 2649 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8d00a686b45b4d5f8d5d971f47327b2d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mk7cw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78d6dbd985-lqqm7_calico-system(f1690d54-3bd7-4103-a321-b2466ca801ef): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:53:53.820756 containerd[1526]: 
time="2025-05-27T02:53:53.820704535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 02:53:53.975292 containerd[1526]: time="2025-05-27T02:53:53.975243248Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:53:53.976038 containerd[1526]: time="2025-05-27T02:53:53.976006174Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:53:53.976181 containerd[1526]: time="2025-05-27T02:53:53.976068218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 02:53:53.976261 kubelet[2649]: E0527 02:53:53.976208 2649 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 02:53:53.976339 kubelet[2649]: E0527 02:53:53.976270 2649 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 02:53:53.976426 kubelet[2649]: E0527 02:53:53.976384 2649 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mk7cw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78d6dbd985-lqqm7_calico-system(f1690d54-3bd7-4103-a321-b2466ca801ef): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:53:53.977875 kubelet[2649]: E0527 02:53:53.977826 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-78d6dbd985-lqqm7" podUID="f1690d54-3bd7-4103-a321-b2466ca801ef" May 27 02:53:57.700420 systemd[1]: Started 
sshd@9-10.0.0.73:22-10.0.0.1:43952.service - OpenSSH per-connection server daemon (10.0.0.1:43952). May 27 02:53:57.761041 sshd[5089]: Accepted publickey for core from 10.0.0.1 port 43952 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:53:57.762774 sshd-session[5089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:53:57.768077 systemd-logind[1506]: New session 10 of user core. May 27 02:53:57.775463 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 02:53:57.972035 sshd[5091]: Connection closed by 10.0.0.1 port 43952 May 27 02:53:57.972476 sshd-session[5089]: pam_unix(sshd:session): session closed for user core May 27 02:53:57.985467 systemd[1]: sshd@9-10.0.0.73:22-10.0.0.1:43952.service: Deactivated successfully. May 27 02:53:57.987269 systemd[1]: session-10.scope: Deactivated successfully. May 27 02:53:57.988392 systemd-logind[1506]: Session 10 logged out. Waiting for processes to exit. May 27 02:53:57.991748 systemd[1]: Started sshd@10-10.0.0.73:22-10.0.0.1:43966.service - OpenSSH per-connection server daemon (10.0.0.1:43966). May 27 02:53:57.992918 systemd-logind[1506]: Removed session 10. May 27 02:53:58.061551 sshd[5105]: Accepted publickey for core from 10.0.0.1 port 43966 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:53:58.063394 sshd-session[5105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:53:58.068326 systemd-logind[1506]: New session 11 of user core. May 27 02:53:58.083505 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 02:53:58.298276 sshd[5107]: Connection closed by 10.0.0.1 port 43966 May 27 02:53:58.298984 sshd-session[5105]: pam_unix(sshd:session): session closed for user core May 27 02:53:58.313896 systemd[1]: sshd@10-10.0.0.73:22-10.0.0.1:43966.service: Deactivated successfully. May 27 02:53:58.319554 systemd[1]: session-11.scope: Deactivated successfully. 
May 27 02:53:58.320542 systemd-logind[1506]: Session 11 logged out. Waiting for processes to exit. May 27 02:53:58.327456 systemd[1]: Started sshd@11-10.0.0.73:22-10.0.0.1:43978.service - OpenSSH per-connection server daemon (10.0.0.1:43978). May 27 02:53:58.328961 systemd-logind[1506]: Removed session 11. May 27 02:53:58.385407 sshd[5118]: Accepted publickey for core from 10.0.0.1 port 43978 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:53:58.386894 sshd-session[5118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:53:58.391128 systemd-logind[1506]: New session 12 of user core. May 27 02:53:58.400523 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 02:53:58.579395 sshd[5120]: Connection closed by 10.0.0.1 port 43978 May 27 02:53:58.579878 sshd-session[5118]: pam_unix(sshd:session): session closed for user core May 27 02:53:58.584094 systemd[1]: sshd@11-10.0.0.73:22-10.0.0.1:43978.service: Deactivated successfully. May 27 02:53:58.586010 systemd[1]: session-12.scope: Deactivated successfully. May 27 02:53:58.586706 systemd-logind[1506]: Session 12 logged out. Waiting for processes to exit. May 27 02:53:58.588237 systemd-logind[1506]: Removed session 12. 
May 27 02:53:58.638017 containerd[1526]: time="2025-05-27T02:53:58.637908947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 02:53:58.794863 containerd[1526]: time="2025-05-27T02:53:58.794698665Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:53:58.795770 containerd[1526]: time="2025-05-27T02:53:58.795673320Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:53:58.795770 containerd[1526]: time="2025-05-27T02:53:58.795750764Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 02:53:58.796059 kubelet[2649]: E0527 02:53:58.796017 2649 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 02:53:58.796648 kubelet[2649]: E0527 02:53:58.796444 2649 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference 
\"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 02:53:58.796648 kubelet[2649]: E0527 02:53:58.796588 2649 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xth2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-nqjnl_calico-system(88e5918f-832b-4950-9cef-4b797dda53c9): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:53:58.797777 kubelet[2649]: E0527 02:53:58.797713 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-nqjnl" podUID="88e5918f-832b-4950-9cef-4b797dda53c9" May 27 02:54:02.994661 kubelet[2649]: I0527 02:54:02.994362 2649 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 27 02:54:03.593785 systemd[1]: Started sshd@12-10.0.0.73:22-10.0.0.1:53442.service - OpenSSH per-connection server daemon (10.0.0.1:53442). May 27 02:54:03.653809 sshd[5145]: Accepted publickey for core from 10.0.0.1 port 53442 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:54:03.657648 sshd-session[5145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:54:03.663533 systemd-logind[1506]: New session 13 of user core. May 27 02:54:03.676541 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 02:54:03.840858 sshd[5147]: Connection closed by 10.0.0.1 port 53442 May 27 02:54:03.841436 sshd-session[5145]: pam_unix(sshd:session): session closed for user core May 27 02:54:03.851565 systemd[1]: sshd@12-10.0.0.73:22-10.0.0.1:53442.service: Deactivated successfully. May 27 02:54:03.855829 systemd[1]: session-13.scope: Deactivated successfully. May 27 02:54:03.861292 systemd-logind[1506]: Session 13 logged out. Waiting for processes to exit. May 27 02:54:03.863704 systemd[1]: Started sshd@13-10.0.0.73:22-10.0.0.1:53448.service - OpenSSH per-connection server daemon (10.0.0.1:53448). May 27 02:54:03.864896 systemd-logind[1506]: Removed session 13. May 27 02:54:03.930187 sshd[5160]: Accepted publickey for core from 10.0.0.1 port 53448 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:54:03.930710 sshd-session[5160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:54:03.934800 systemd-logind[1506]: New session 14 of user core. 
May 27 02:54:03.949444 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 02:54:04.177207 sshd[5163]: Connection closed by 10.0.0.1 port 53448 May 27 02:54:04.177517 sshd-session[5160]: pam_unix(sshd:session): session closed for user core May 27 02:54:04.187909 systemd[1]: sshd@13-10.0.0.73:22-10.0.0.1:53448.service: Deactivated successfully. May 27 02:54:04.189624 systemd[1]: session-14.scope: Deactivated successfully. May 27 02:54:04.191243 systemd-logind[1506]: Session 14 logged out. Waiting for processes to exit. May 27 02:54:04.193793 systemd[1]: Started sshd@14-10.0.0.73:22-10.0.0.1:53450.service - OpenSSH per-connection server daemon (10.0.0.1:53450). May 27 02:54:04.201491 systemd-logind[1506]: Removed session 14. May 27 02:54:04.253645 sshd[5174]: Accepted publickey for core from 10.0.0.1 port 53450 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:54:04.254973 sshd-session[5174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:54:04.264659 systemd-logind[1506]: New session 15 of user core. May 27 02:54:04.270524 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 02:54:05.029384 sshd[5176]: Connection closed by 10.0.0.1 port 53450 May 27 02:54:05.030059 sshd-session[5174]: pam_unix(sshd:session): session closed for user core May 27 02:54:05.040924 systemd[1]: sshd@14-10.0.0.73:22-10.0.0.1:53450.service: Deactivated successfully. May 27 02:54:05.044840 systemd[1]: session-15.scope: Deactivated successfully. May 27 02:54:05.048004 systemd-logind[1506]: Session 15 logged out. Waiting for processes to exit. May 27 02:54:05.054681 systemd[1]: Started sshd@15-10.0.0.73:22-10.0.0.1:53452.service - OpenSSH per-connection server daemon (10.0.0.1:53452). May 27 02:54:05.057330 systemd-logind[1506]: Removed session 15. 
May 27 02:54:05.115005 sshd[5198]: Accepted publickey for core from 10.0.0.1 port 53452 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:54:05.116440 sshd-session[5198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:54:05.121021 systemd-logind[1506]: New session 16 of user core. May 27 02:54:05.128504 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 02:54:05.456051 sshd[5200]: Connection closed by 10.0.0.1 port 53452 May 27 02:54:05.455922 sshd-session[5198]: pam_unix(sshd:session): session closed for user core May 27 02:54:05.468678 systemd[1]: sshd@15-10.0.0.73:22-10.0.0.1:53452.service: Deactivated successfully. May 27 02:54:05.472926 systemd[1]: session-16.scope: Deactivated successfully. May 27 02:54:05.474873 systemd-logind[1506]: Session 16 logged out. Waiting for processes to exit. May 27 02:54:05.476574 systemd-logind[1506]: Removed session 16. May 27 02:54:05.478555 systemd[1]: Started sshd@16-10.0.0.73:22-10.0.0.1:53468.service - OpenSSH per-connection server daemon (10.0.0.1:53468). May 27 02:54:05.535857 sshd[5212]: Accepted publickey for core from 10.0.0.1 port 53468 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:54:05.537343 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:54:05.542391 systemd-logind[1506]: New session 17 of user core. May 27 02:54:05.553474 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 02:54:05.688327 sshd[5214]: Connection closed by 10.0.0.1 port 53468 May 27 02:54:05.688912 sshd-session[5212]: pam_unix(sshd:session): session closed for user core May 27 02:54:05.692394 systemd[1]: sshd@16-10.0.0.73:22-10.0.0.1:53468.service: Deactivated successfully. May 27 02:54:05.695870 systemd[1]: session-17.scope: Deactivated successfully. May 27 02:54:05.696578 systemd-logind[1506]: Session 17 logged out. Waiting for processes to exit. 
May 27 02:54:05.697599 systemd-logind[1506]: Removed session 17. May 27 02:54:06.352933 containerd[1526]: time="2025-05-27T02:54:06.352894570Z" level=info msg="TaskExit event in podsandbox handler container_id:\"55e91eb4f3b31b9e8d757ec63c46cb2c527e814d8224d39e073ad6bfb25dd1cd\" id:\"af7cf33c44291cbc4212a5640e7e648c905dd70d1a7dc49128038e2038b85f53\" pid:5237 exited_at:{seconds:1748314446 nanos:352685240}" May 27 02:54:06.638277 kubelet[2649]: E0527 02:54:06.637990 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-78d6dbd985-lqqm7" podUID="f1690d54-3bd7-4103-a321-b2466ca801ef" May 27 02:54:08.844434 containerd[1526]: time="2025-05-27T02:54:08.844394725Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3da46409b40208ba48974be6badb9ab78fd51f411c1d82bfafe91ca15ca4cba8\" id:\"35f6ba378825108020ce6efcd6f5060aa5d17f79b57e0eaefd359d9781c878c1\" pid:5262 
exited_at:{seconds:1748314448 nanos:843870819}" May 27 02:54:09.635082 kubelet[2649]: E0527 02:54:09.635030 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-nqjnl" podUID="88e5918f-832b-4950-9cef-4b797dda53c9" May 27 02:54:10.708070 systemd[1]: Started sshd@17-10.0.0.73:22-10.0.0.1:53472.service - OpenSSH per-connection server daemon (10.0.0.1:53472). May 27 02:54:10.776733 sshd[5278]: Accepted publickey for core from 10.0.0.1 port 53472 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:54:10.778233 sshd-session[5278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:54:10.783400 systemd-logind[1506]: New session 18 of user core. May 27 02:54:10.789492 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 02:54:10.929116 sshd[5280]: Connection closed by 10.0.0.1 port 53472 May 27 02:54:10.929431 sshd-session[5278]: pam_unix(sshd:session): session closed for user core May 27 02:54:10.932687 systemd[1]: sshd@17-10.0.0.73:22-10.0.0.1:53472.service: Deactivated successfully. May 27 02:54:10.934585 systemd[1]: session-18.scope: Deactivated successfully. May 27 02:54:10.935152 systemd-logind[1506]: Session 18 logged out. Waiting for processes to exit. May 27 02:54:10.937046 systemd-logind[1506]: Removed session 18. 
May 27 02:54:15.945062 systemd[1]: Started sshd@18-10.0.0.73:22-10.0.0.1:40184.service - OpenSSH per-connection server daemon (10.0.0.1:40184). May 27 02:54:16.000164 sshd[5298]: Accepted publickey for core from 10.0.0.1 port 40184 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:54:16.001464 sshd-session[5298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:54:16.005453 systemd-logind[1506]: New session 19 of user core. May 27 02:54:16.015451 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 02:54:16.197418 sshd[5300]: Connection closed by 10.0.0.1 port 40184 May 27 02:54:16.198416 sshd-session[5298]: pam_unix(sshd:session): session closed for user core May 27 02:54:16.202007 systemd[1]: sshd@18-10.0.0.73:22-10.0.0.1:40184.service: Deactivated successfully. May 27 02:54:16.203719 systemd[1]: session-19.scope: Deactivated successfully. May 27 02:54:16.205027 systemd-logind[1506]: Session 19 logged out. Waiting for processes to exit. May 27 02:54:16.207011 systemd-logind[1506]: Removed session 19. May 27 02:54:19.087696 containerd[1526]: time="2025-05-27T02:54:19.087649598Z" level=info msg="TaskExit event in podsandbox handler container_id:\"55e91eb4f3b31b9e8d757ec63c46cb2c527e814d8224d39e073ad6bfb25dd1cd\" id:\"c590b32bb96fee9fc9f45da017e2e16ba72a00ca634e406df5a335ef54336ad2\" pid:5332 exited_at:{seconds:1748314459 nanos:87431360}" May 27 02:54:21.212522 systemd[1]: Started sshd@19-10.0.0.73:22-10.0.0.1:40190.service - OpenSSH per-connection server daemon (10.0.0.1:40190). May 27 02:54:21.293587 sshd[5343]: Accepted publickey for core from 10.0.0.1 port 40190 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:54:21.296287 sshd-session[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:54:21.300326 systemd-logind[1506]: New session 20 of user core. 
May 27 02:54:21.308473 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 02:54:21.493758 sshd[5345]: Connection closed by 10.0.0.1 port 40190 May 27 02:54:21.494342 sshd-session[5343]: pam_unix(sshd:session): session closed for user core May 27 02:54:21.499569 systemd[1]: sshd@19-10.0.0.73:22-10.0.0.1:40190.service: Deactivated successfully. May 27 02:54:21.503250 systemd[1]: session-20.scope: Deactivated successfully. May 27 02:54:21.504081 systemd-logind[1506]: Session 20 logged out. Waiting for processes to exit. May 27 02:54:21.505283 systemd-logind[1506]: Removed session 20. May 27 02:54:21.636403 containerd[1526]: time="2025-05-27T02:54:21.636364072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\"" May 27 02:54:21.805706 containerd[1526]: time="2025-05-27T02:54:21.805584432Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:54:21.806854 containerd[1526]: time="2025-05-27T02:54:21.806751625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:54:21.806944 containerd[1526]: time="2025-05-27T02:54:21.806819665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.0: active requests=0, bytes read=86" May 27 02:54:21.807130 kubelet[2649]: E0527 02:54:21.807077 2649 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 02:54:21.807485 kubelet[2649]: E0527 02:54:21.807137 2649 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker:v3.30.0" May 27 02:54:21.807485 kubelet[2649]: E0527 02:54:21.807253 2649 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.0,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:8d00a686b45b4d5f8d5d971f47327b2d,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mk7cw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78d6dbd985-lqqm7_calico-system(f1690d54-3bd7-4103-a321-b2466ca801ef): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:54:21.810279 containerd[1526]: 
time="2025-05-27T02:54:21.810144727Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\"" May 27 02:54:21.971265 containerd[1526]: time="2025-05-27T02:54:21.971205851Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:54:21.973752 containerd[1526]: time="2025-05-27T02:54:21.973704197Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:54:21.973889 containerd[1526]: time="2025-05-27T02:54:21.973784877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.0: active requests=0, bytes read=86" May 27 02:54:21.973986 kubelet[2649]: E0527 02:54:21.973928 2649 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 02:54:21.974075 kubelet[2649]: E0527 02:54:21.973999 2649 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.0" May 27 02:54:21.974514 kubelet[2649]: E0527 02:54:21.974119 2649 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mk7cw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMou
nt:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-78d6dbd985-lqqm7_calico-system(f1690d54-3bd7-4103-a321-b2466ca801ef): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:54:21.975691 kubelet[2649]: E0527 02:54:21.975654 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker%3Apull&service=ghcr.io: 403 Forbidden\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fwhisker-backend%3Apull&service=ghcr.io: 403 Forbidden\"]" pod="calico-system/whisker-78d6dbd985-lqqm7" podUID="f1690d54-3bd7-4103-a321-b2466ca801ef" May 27 02:54:24.637502 containerd[1526]: 
time="2025-05-27T02:54:24.637259894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\"" May 27 02:54:24.802324 containerd[1526]: time="2025-05-27T02:54:24.802254145Z" level=info msg="fetch failed" error="failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" host=ghcr.io May 27 02:54:24.803322 containerd[1526]: time="2025-05-27T02:54:24.803188623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" May 27 02:54:24.803322 containerd[1526]: time="2025-05-27T02:54:24.803283143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.0: active requests=0, bytes read=86" May 27 02:54:24.803512 kubelet[2649]: E0527 02:54:24.803434 2649 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 02:54:24.803512 kubelet[2649]: E0527 02:54:24.803492 2649 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed 
to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" image="ghcr.io/flatcar/calico/goldmane:v3.30.0" May 27 02:54:24.804250 kubelet[2649]: E0527 02:54:24.803617 2649 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.0,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xth2m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-78d55f7ddc-nqjnl_calico-system(88e5918f-832b-4950-9cef-4b797dda53c9): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.0\": failed to authorize: failed to fetch anonymous token: unexpected status from GET request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden" logger="UnhandledError" May 27 02:54:24.804834 kubelet[2649]: E0527 02:54:24.804787 2649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.0\\\": failed to authorize: failed to fetch anonymous token: unexpected status from GET 
request to https://ghcr.io/token?scope=repository%3Aflatcar%2Fcalico%2Fgoldmane%3Apull&service=ghcr.io: 403 Forbidden\"" pod="calico-system/goldmane-78d55f7ddc-nqjnl" podUID="88e5918f-832b-4950-9cef-4b797dda53c9" May 27 02:54:26.511870 systemd[1]: Started sshd@20-10.0.0.73:22-10.0.0.1:35008.service - OpenSSH per-connection server daemon (10.0.0.1:35008). May 27 02:54:26.575283 sshd[5358]: Accepted publickey for core from 10.0.0.1 port 35008 ssh2: RSA SHA256:+Ok2qUkoQikU0DO7rksFgy8mCIIB6/JUg3lsMDZPwmg May 27 02:54:26.578248 sshd-session[5358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:54:26.587230 systemd-logind[1506]: New session 21 of user core. May 27 02:54:26.592672 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 02:54:26.738598 sshd[5360]: Connection closed by 10.0.0.1 port 35008 May 27 02:54:26.738506 sshd-session[5358]: pam_unix(sshd:session): session closed for user core May 27 02:54:26.743639 systemd[1]: sshd@20-10.0.0.73:22-10.0.0.1:35008.service: Deactivated successfully. May 27 02:54:26.745902 systemd[1]: session-21.scope: Deactivated successfully. May 27 02:54:26.747894 systemd-logind[1506]: Session 21 logged out. Waiting for processes to exit. May 27 02:54:26.748800 systemd-logind[1506]: Removed session 21.