Aug 12 23:46:18.850848 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 12 23:46:18.850870 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Aug 12 21:51:24 -00 2025
Aug 12 23:46:18.850880 kernel: KASLR enabled
Aug 12 23:46:18.850886 kernel: efi: EFI v2.7 by EDK II
Aug 12 23:46:18.850891 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Aug 12 23:46:18.850896 kernel: random: crng init done
Aug 12 23:46:18.850903 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Aug 12 23:46:18.850909 kernel: secureboot: Secure boot enabled
Aug 12 23:46:18.850914 kernel: ACPI: Early table checksum verification disabled
Aug 12 23:46:18.850921 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Aug 12 23:46:18.850927 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 12 23:46:18.850933 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:46:18.850938 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:46:18.850944 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:46:18.850951 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:46:18.850959 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:46:18.850965 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:46:18.850971 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:46:18.850977 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:46:18.850983 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 12 23:46:18.850989 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 12 23:46:18.850995 kernel: ACPI: Use ACPI SPCR as default console: Yes
Aug 12 23:46:18.851001 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:46:18.851007 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Aug 12 23:46:18.851013 kernel: Zone ranges:
Aug 12 23:46:18.851020 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:46:18.851026 kernel: DMA32 empty
Aug 12 23:46:18.851032 kernel: Normal empty
Aug 12 23:46:18.851037 kernel: Device empty
Aug 12 23:46:18.851043 kernel: Movable zone start for each node
Aug 12 23:46:18.851049 kernel: Early memory node ranges
Aug 12 23:46:18.851055 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Aug 12 23:46:18.851061 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Aug 12 23:46:18.851067 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Aug 12 23:46:18.851073 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Aug 12 23:46:18.851079 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Aug 12 23:46:18.851085 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Aug 12 23:46:18.851093 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Aug 12 23:46:18.851099 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Aug 12 23:46:18.851105 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 12 23:46:18.851113 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 12 23:46:18.851119 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 12 23:46:18.851125 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Aug 12 23:46:18.851132 kernel: psci: probing for conduit method from ACPI.
Aug 12 23:46:18.851140 kernel: psci: PSCIv1.1 detected in firmware.
Aug 12 23:46:18.851146 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 12 23:46:18.851152 kernel: psci: Trusted OS migration not required
Aug 12 23:46:18.851159 kernel: psci: SMC Calling Convention v1.1
Aug 12 23:46:18.851165 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 12 23:46:18.851171 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Aug 12 23:46:18.851178 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Aug 12 23:46:18.851200 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 12 23:46:18.851207 kernel: Detected PIPT I-cache on CPU0
Aug 12 23:46:18.851230 kernel: CPU features: detected: GIC system register CPU interface
Aug 12 23:46:18.851236 kernel: CPU features: detected: Spectre-v4
Aug 12 23:46:18.851242 kernel: CPU features: detected: Spectre-BHB
Aug 12 23:46:18.851249 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 12 23:46:18.851255 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 12 23:46:18.851262 kernel: CPU features: detected: ARM erratum 1418040
Aug 12 23:46:18.851268 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 12 23:46:18.851274 kernel: alternatives: applying boot alternatives
Aug 12 23:46:18.851282 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ce82f1ef836ba8581e59ce9db4eef4240d287b2b5f9937c28f0cd024f4dc9107
Aug 12 23:46:18.851288 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 12 23:46:18.851295 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 12 23:46:18.851303 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 12 23:46:18.851310 kernel: Fallback order for Node 0: 0
Aug 12 23:46:18.851316 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Aug 12 23:46:18.851322 kernel: Policy zone: DMA
Aug 12 23:46:18.851328 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 12 23:46:18.851335 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Aug 12 23:46:18.851341 kernel: software IO TLB: area num 4.
Aug 12 23:46:18.851347 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Aug 12 23:46:18.851354 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Aug 12 23:46:18.851360 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 12 23:46:18.851366 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 12 23:46:18.851373 kernel: rcu: RCU event tracing is enabled.
Aug 12 23:46:18.851381 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 12 23:46:18.851388 kernel: Trampoline variant of Tasks RCU enabled.
Aug 12 23:46:18.851394 kernel: Tracing variant of Tasks RCU enabled.
Aug 12 23:46:18.851400 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 12 23:46:18.851407 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 12 23:46:18.851413 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:46:18.851420 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 12 23:46:18.851426 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 12 23:46:18.851433 kernel: GICv3: 256 SPIs implemented
Aug 12 23:46:18.851439 kernel: GICv3: 0 Extended SPIs implemented
Aug 12 23:46:18.851445 kernel: Root IRQ handler: gic_handle_irq
Aug 12 23:46:18.851453 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 12 23:46:18.851459 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Aug 12 23:46:18.851466 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 12 23:46:18.851472 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 12 23:46:18.851478 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Aug 12 23:46:18.851485 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Aug 12 23:46:18.851491 kernel: GICv3: using LPI property table @0x0000000040130000
Aug 12 23:46:18.851497 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Aug 12 23:46:18.851504 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 12 23:46:18.851510 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:46:18.851516 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 12 23:46:18.851523 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 12 23:46:18.851531 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 12 23:46:18.851537 kernel: arm-pv: using stolen time PV
Aug 12 23:46:18.851544 kernel: Console: colour dummy device 80x25
Aug 12 23:46:18.851550 kernel: ACPI: Core revision 20240827
Aug 12 23:46:18.851557 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 12 23:46:18.851563 kernel: pid_max: default: 32768 minimum: 301
Aug 12 23:46:18.851570 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 12 23:46:18.851576 kernel: landlock: Up and running.
Aug 12 23:46:18.851583 kernel: SELinux: Initializing.
Aug 12 23:46:18.851590 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:46:18.851597 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 12 23:46:18.851604 kernel: rcu: Hierarchical SRCU implementation.
Aug 12 23:46:18.851610 kernel: rcu: Max phase no-delay instances is 400.
Aug 12 23:46:18.851617 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 12 23:46:18.851623 kernel: Remapping and enabling EFI services.
Aug 12 23:46:18.851630 kernel: smp: Bringing up secondary CPUs ...
Aug 12 23:46:18.851636 kernel: Detected PIPT I-cache on CPU1
Aug 12 23:46:18.851652 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 12 23:46:18.851661 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Aug 12 23:46:18.851672 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:46:18.851679 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 12 23:46:18.851687 kernel: Detected PIPT I-cache on CPU2
Aug 12 23:46:18.851693 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 12 23:46:18.851700 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Aug 12 23:46:18.851707 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:46:18.851714 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 12 23:46:18.851721 kernel: Detected PIPT I-cache on CPU3
Aug 12 23:46:18.851729 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 12 23:46:18.851736 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Aug 12 23:46:18.851743 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 12 23:46:18.851750 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 12 23:46:18.851756 kernel: smp: Brought up 1 node, 4 CPUs
Aug 12 23:46:18.851763 kernel: SMP: Total of 4 processors activated.
Aug 12 23:46:18.851770 kernel: CPU: All CPU(s) started at EL1
Aug 12 23:46:18.851777 kernel: CPU features: detected: 32-bit EL0 Support
Aug 12 23:46:18.851783 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 12 23:46:18.851792 kernel: CPU features: detected: Common not Private translations
Aug 12 23:46:18.851799 kernel: CPU features: detected: CRC32 instructions
Aug 12 23:46:18.851805 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 12 23:46:18.851812 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 12 23:46:18.851819 kernel: CPU features: detected: LSE atomic instructions
Aug 12 23:46:18.851826 kernel: CPU features: detected: Privileged Access Never
Aug 12 23:46:18.851832 kernel: CPU features: detected: RAS Extension Support
Aug 12 23:46:18.851839 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 12 23:46:18.851846 kernel: alternatives: applying system-wide alternatives
Aug 12 23:46:18.851854 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Aug 12 23:46:18.851861 kernel: Memory: 2421860K/2572288K available (11136K kernel code, 2436K rwdata, 9080K rodata, 39488K init, 1038K bss, 128092K reserved, 16384K cma-reserved)
Aug 12 23:46:18.851868 kernel: devtmpfs: initialized
Aug 12 23:46:18.851875 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 12 23:46:18.851882 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 12 23:46:18.851889 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 12 23:46:18.851896 kernel: 0 pages in range for non-PLT usage
Aug 12 23:46:18.851903 kernel: 508432 pages in range for PLT usage
Aug 12 23:46:18.851909 kernel: pinctrl core: initialized pinctrl subsystem
Aug 12 23:46:18.851917 kernel: SMBIOS 3.0.0 present.
Aug 12 23:46:18.851924 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Aug 12 23:46:18.851931 kernel: DMI: Memory slots populated: 1/1
Aug 12 23:46:18.851938 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 12 23:46:18.851944 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 12 23:46:18.851951 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 12 23:46:18.851958 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 12 23:46:18.851965 kernel: audit: initializing netlink subsys (disabled)
Aug 12 23:46:18.851972 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Aug 12 23:46:18.851980 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 12 23:46:18.851987 kernel: cpuidle: using governor menu
Aug 12 23:46:18.851994 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 12 23:46:18.852001 kernel: ASID allocator initialised with 32768 entries
Aug 12 23:46:18.852008 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 12 23:46:18.852014 kernel: Serial: AMBA PL011 UART driver
Aug 12 23:46:18.852021 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 12 23:46:18.852028 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 12 23:46:18.852035 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 12 23:46:18.852043 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 12 23:46:18.852050 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 12 23:46:18.852056 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 12 23:46:18.852063 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 12 23:46:18.852070 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 12 23:46:18.852076 kernel: ACPI: Added _OSI(Module Device)
Aug 12 23:46:18.852083 kernel: ACPI: Added _OSI(Processor Device)
Aug 12 23:46:18.852090 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 12 23:46:18.852097 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 12 23:46:18.852105 kernel: ACPI: Interpreter enabled
Aug 12 23:46:18.852111 kernel: ACPI: Using GIC for interrupt routing
Aug 12 23:46:18.852118 kernel: ACPI: MCFG table detected, 1 entries
Aug 12 23:46:18.852125 kernel: ACPI: CPU0 has been hot-added
Aug 12 23:46:18.852132 kernel: ACPI: CPU1 has been hot-added
Aug 12 23:46:18.852138 kernel: ACPI: CPU2 has been hot-added
Aug 12 23:46:18.852145 kernel: ACPI: CPU3 has been hot-added
Aug 12 23:46:18.852152 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 12 23:46:18.852159 kernel: printk: legacy console [ttyAMA0] enabled
Aug 12 23:46:18.852167 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 12 23:46:18.852326 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 12 23:46:18.852393 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 12 23:46:18.852453 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 12 23:46:18.852511 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 12 23:46:18.852566 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 12 23:46:18.852575 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 12 23:46:18.852585 kernel: PCI host bridge to bus 0000:00
Aug 12 23:46:18.852659 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 12 23:46:18.852715 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 12 23:46:18.852767 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 12 23:46:18.852818 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 12 23:46:18.852895 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Aug 12 23:46:18.852965 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Aug 12 23:46:18.853028 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Aug 12 23:46:18.853088 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Aug 12 23:46:18.853147 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 12 23:46:18.853233 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Aug 12 23:46:18.853296 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Aug 12 23:46:18.853355 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Aug 12 23:46:18.853408 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 12 23:46:18.853463 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 12 23:46:18.853515 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 12 23:46:18.853524 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 12 23:46:18.853531 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 12 23:46:18.853538 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 12 23:46:18.853544 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 12 23:46:18.853551 kernel: iommu: Default domain type: Translated
Aug 12 23:46:18.853558 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 12 23:46:18.853566 kernel: efivars: Registered efivars operations
Aug 12 23:46:18.853573 kernel: vgaarb: loaded
Aug 12 23:46:18.853580 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 12 23:46:18.853587 kernel: VFS: Disk quotas dquot_6.6.0
Aug 12 23:46:18.853594 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 12 23:46:18.853601 kernel: pnp: PnP ACPI init
Aug 12 23:46:18.853675 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 12 23:46:18.853686 kernel: pnp: PnP ACPI: found 1 devices
Aug 12 23:46:18.853695 kernel: NET: Registered PF_INET protocol family
Aug 12 23:46:18.853702 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 12 23:46:18.853709 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 12 23:46:18.853716 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 12 23:46:18.853723 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 12 23:46:18.853730 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 12 23:46:18.853737 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 12 23:46:18.853744 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:46:18.853751 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 12 23:46:18.853759 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 12 23:46:18.853766 kernel: PCI: CLS 0 bytes, default 64
Aug 12 23:46:18.853773 kernel: kvm [1]: HYP mode not available
Aug 12 23:46:18.853780 kernel: Initialise system trusted keyrings
Aug 12 23:46:18.853787 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 12 23:46:18.853794 kernel: Key type asymmetric registered
Aug 12 23:46:18.853800 kernel: Asymmetric key parser 'x509' registered
Aug 12 23:46:18.853807 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 12 23:46:18.853814 kernel: io scheduler mq-deadline registered
Aug 12 23:46:18.853822 kernel: io scheduler kyber registered
Aug 12 23:46:18.853829 kernel: io scheduler bfq registered
Aug 12 23:46:18.853836 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 12 23:46:18.853843 kernel: ACPI: button: Power Button [PWRB]
Aug 12 23:46:18.853850 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 12 23:46:18.853909 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 12 23:46:18.853919 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 12 23:46:18.853925 kernel: thunder_xcv, ver 1.0
Aug 12 23:46:18.853932 kernel: thunder_bgx, ver 1.0
Aug 12 23:46:18.853941 kernel: nicpf, ver 1.0
Aug 12 23:46:18.853947 kernel: nicvf, ver 1.0
Aug 12 23:46:18.854019 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 12 23:46:18.854075 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-12T23:46:18 UTC (1755042378)
Aug 12 23:46:18.854084 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 12 23:46:18.854091 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Aug 12 23:46:18.854098 kernel: watchdog: NMI not fully supported
Aug 12 23:46:18.854105 kernel: watchdog: Hard watchdog permanently disabled
Aug 12 23:46:18.854113 kernel: NET: Registered PF_INET6 protocol family
Aug 12 23:46:18.854120 kernel: Segment Routing with IPv6
Aug 12 23:46:18.854127 kernel: In-situ OAM (IOAM) with IPv6
Aug 12 23:46:18.854134 kernel: NET: Registered PF_PACKET protocol family
Aug 12 23:46:18.854141 kernel: Key type dns_resolver registered
Aug 12 23:46:18.854148 kernel: registered taskstats version 1
Aug 12 23:46:18.854154 kernel: Loading compiled-in X.509 certificates
Aug 12 23:46:18.854161 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: e74bfacfa68399ed7282bf533dd5901fdb84b882'
Aug 12 23:46:18.854168 kernel: Demotion targets for Node 0: null
Aug 12 23:46:18.854177 kernel: Key type .fscrypt registered
Aug 12 23:46:18.854198 kernel: Key type fscrypt-provisioning registered
Aug 12 23:46:18.854205 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 12 23:46:18.854212 kernel: ima: Allocated hash algorithm: sha1
Aug 12 23:46:18.854219 kernel: ima: No architecture policies found
Aug 12 23:46:18.854225 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 12 23:46:18.854232 kernel: clk: Disabling unused clocks
Aug 12 23:46:18.854239 kernel: PM: genpd: Disabling unused power domains
Aug 12 23:46:18.854246 kernel: Warning: unable to open an initial console.
Aug 12 23:46:18.854255 kernel: Freeing unused kernel memory: 39488K
Aug 12 23:46:18.854262 kernel: Run /init as init process
Aug 12 23:46:18.854268 kernel: with arguments:
Aug 12 23:46:18.854275 kernel: /init
Aug 12 23:46:18.854282 kernel: with environment:
Aug 12 23:46:18.854288 kernel: HOME=/
Aug 12 23:46:18.854295 kernel: TERM=linux
Aug 12 23:46:18.854302 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 12 23:46:18.854309 systemd[1]: Successfully made /usr/ read-only.
Aug 12 23:46:18.854321 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 12 23:46:18.854328 systemd[1]: Detected virtualization kvm.
Aug 12 23:46:18.854336 systemd[1]: Detected architecture arm64.
Aug 12 23:46:18.854342 systemd[1]: Running in initrd.
Aug 12 23:46:18.854349 systemd[1]: No hostname configured, using default hostname.
Aug 12 23:46:18.854357 systemd[1]: Hostname set to .
Aug 12 23:46:18.854364 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:46:18.854373 systemd[1]: Queued start job for default target initrd.target.
Aug 12 23:46:18.854380 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:46:18.854387 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:46:18.854395 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 12 23:46:18.854403 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:46:18.854410 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 12 23:46:18.854418 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 12 23:46:18.854428 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 12 23:46:18.854435 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 12 23:46:18.854442 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:46:18.854450 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:46:18.854457 systemd[1]: Reached target paths.target - Path Units.
Aug 12 23:46:18.854464 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:46:18.854471 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:46:18.854479 systemd[1]: Reached target timers.target - Timer Units.
Aug 12 23:46:18.854487 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 12 23:46:18.854495 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 12 23:46:18.854502 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 12 23:46:18.854510 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 12 23:46:18.854517 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:46:18.854524 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:46:18.854532 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:46:18.854539 systemd[1]: Reached target sockets.target - Socket Units.
Aug 12 23:46:18.854546 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 12 23:46:18.854555 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:46:18.854562 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 12 23:46:18.854570 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 12 23:46:18.854577 systemd[1]: Starting systemd-fsck-usr.service...
Aug 12 23:46:18.854585 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:46:18.854592 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:46:18.854599 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:46:18.854606 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 12 23:46:18.854616 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:46:18.854623 systemd[1]: Finished systemd-fsck-usr.service.
Aug 12 23:46:18.854631 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 12 23:46:18.854661 systemd-journald[245]: Collecting audit messages is disabled.
Aug 12 23:46:18.854682 systemd-journald[245]: Journal started
Aug 12 23:46:18.854701 systemd-journald[245]: Runtime Journal (/run/log/journal/85d0fa39897f49ec9775ead503618bea) is 6M, max 48.5M, 42.4M free.
Aug 12 23:46:18.859270 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 12 23:46:18.847427 systemd-modules-load[246]: Inserted module 'overlay'
Aug 12 23:46:18.860785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:46:18.861959 systemd-modules-load[246]: Inserted module 'br_netfilter'
Aug 12 23:46:18.863164 kernel: Bridge firewalling registered
Aug 12 23:46:18.863181 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:46:18.865231 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:46:18.866174 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 12 23:46:18.869845 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 12 23:46:18.871274 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:46:18.872686 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:46:18.879327 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:46:18.883905 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:46:18.886753 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:46:18.887332 systemd-tmpfiles[273]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 12 23:46:18.890244 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:46:18.893538 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:46:18.894453 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:46:18.896569 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 12 23:46:18.927195 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ce82f1ef836ba8581e59ce9db4eef4240d287b2b5f9937c28f0cd024f4dc9107
Aug 12 23:46:18.940030 systemd-resolved[288]: Positive Trust Anchors:
Aug 12 23:46:18.940041 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 12 23:46:18.940072 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 12 23:46:18.946320 systemd-resolved[288]: Defaulting to hostname 'linux'.
Aug 12 23:46:18.947427 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 12 23:46:18.948446 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 12 23:46:19.002221 kernel: SCSI subsystem initialized
Aug 12 23:46:19.011206 kernel: Loading iSCSI transport class v2.0-870.
Aug 12 23:46:19.020213 kernel: iscsi: registered transport (tcp)
Aug 12 23:46:19.032453 kernel: iscsi: registered transport (qla4xxx)
Aug 12 23:46:19.032469 kernel: QLogic iSCSI HBA Driver
Aug 12 23:46:19.048892 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 12 23:46:19.063230 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:46:19.064433 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 12 23:46:19.109761 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:46:19.111807 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 12 23:46:19.172234 kernel: raid6: neonx8 gen() 15795 MB/s Aug 12 23:46:19.189212 kernel: raid6: neonx4 gen() 15761 MB/s Aug 12 23:46:19.206204 kernel: raid6: neonx2 gen() 13176 MB/s Aug 12 23:46:19.223225 kernel: raid6: neonx1 gen() 10431 MB/s Aug 12 23:46:19.240212 kernel: raid6: int64x8 gen() 6868 MB/s Aug 12 23:46:19.257215 kernel: raid6: int64x4 gen() 7334 MB/s Aug 12 23:46:19.274207 kernel: raid6: int64x2 gen() 6092 MB/s Aug 12 23:46:19.291200 kernel: raid6: int64x1 gen() 5044 MB/s Aug 12 23:46:19.291213 kernel: raid6: using algorithm neonx8 gen() 15795 MB/s Aug 12 23:46:19.308210 kernel: raid6: .... xor() 12053 MB/s, rmw enabled Aug 12 23:46:19.308232 kernel: raid6: using neon recovery algorithm Aug 12 23:46:19.313249 kernel: xor: measuring software checksum speed Aug 12 23:46:19.313278 kernel: 8regs : 21624 MB/sec Aug 12 23:46:19.314263 kernel: 32regs : 21676 MB/sec Aug 12 23:46:19.314293 kernel: arm64_neon : 28089 MB/sec Aug 12 23:46:19.314310 kernel: xor: using function: arm64_neon (28089 MB/sec) Aug 12 23:46:19.375219 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 12 23:46:19.381707 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 12 23:46:19.383975 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 12 23:46:19.412002 systemd-udevd[498]: Using default interface naming scheme 'v255'. Aug 12 23:46:19.416063 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 12 23:46:19.417680 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 12 23:46:19.444143 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation Aug 12 23:46:19.465205 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 12 23:46:19.468300 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 12 23:46:19.520218 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Aug 12 23:46:19.522573 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 12 23:46:19.558908 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Aug 12 23:46:19.559064 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 12 23:46:19.566413 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 12 23:46:19.566461 kernel: GPT:9289727 != 19775487 Aug 12 23:46:19.566472 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 12 23:46:19.567211 kernel: GPT:9289727 != 19775487 Aug 12 23:46:19.568248 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 12 23:46:19.568283 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:46:19.570793 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 12 23:46:19.570908 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:46:19.573549 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:46:19.575389 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 12 23:46:19.603851 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 12 23:46:19.605077 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:46:19.612247 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 12 23:46:19.619550 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 12 23:46:19.627234 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 12 23:46:19.628232 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 12 23:46:19.637015 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Aug 12 23:46:19.637959 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 12 23:46:19.639450 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:46:19.640990 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 12 23:46:19.643167 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 12 23:46:19.644672 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 12 23:46:19.659250 disk-uuid[591]: Primary Header is updated. Aug 12 23:46:19.659250 disk-uuid[591]: Secondary Entries is updated. Aug 12 23:46:19.659250 disk-uuid[591]: Secondary Header is updated. Aug 12 23:46:19.663042 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 12 23:46:19.665927 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:46:20.673211 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 12 23:46:20.674163 disk-uuid[596]: The operation has completed successfully. Aug 12 23:46:20.699772 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 12 23:46:20.699870 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 12 23:46:20.725834 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 12 23:46:20.746168 sh[610]: Success Aug 12 23:46:20.761286 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 12 23:46:20.761330 kernel: device-mapper: uevent: version 1.0.3 Aug 12 23:46:20.762618 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 12 23:46:20.774221 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Aug 12 23:46:20.801704 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 12 23:46:20.803923 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Aug 12 23:46:20.823947 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 12 23:46:20.835098 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 12 23:46:20.835132 kernel: BTRFS: device fsid 7658cdd8-2ee4-4f84-82be-1f808605c89c devid 1 transid 42 /dev/mapper/usr (253:0) scanned by mount (622) Aug 12 23:46:20.837673 kernel: BTRFS info (device dm-0): first mount of filesystem 7658cdd8-2ee4-4f84-82be-1f808605c89c Aug 12 23:46:20.837691 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:46:20.837701 kernel: BTRFS info (device dm-0): using free-space-tree Aug 12 23:46:20.841127 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 12 23:46:20.842324 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 12 23:46:20.843290 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 12 23:46:20.844119 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 12 23:46:20.846705 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 12 23:46:20.867917 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (652) Aug 12 23:46:20.867974 kernel: BTRFS info (device vda6): first mount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb Aug 12 23:46:20.867991 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:46:20.868648 kernel: BTRFS info (device vda6): using free-space-tree Aug 12 23:46:20.875216 kernel: BTRFS info (device vda6): last unmount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb Aug 12 23:46:20.876393 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 12 23:46:20.879071 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Aug 12 23:46:20.960125 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 12 23:46:20.962959 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 12 23:46:21.013340 systemd-networkd[806]: lo: Link UP Aug 12 23:46:21.013353 systemd-networkd[806]: lo: Gained carrier Aug 12 23:46:21.014017 systemd-networkd[806]: Enumeration completed Aug 12 23:46:21.014099 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 12 23:46:21.014781 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 12 23:46:21.014785 systemd-networkd[806]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 12 23:46:21.014996 systemd[1]: Reached target network.target - Network. Aug 12 23:46:21.015649 systemd-networkd[806]: eth0: Link UP Aug 12 23:46:21.015745 systemd-networkd[806]: eth0: Gained carrier Aug 12 23:46:21.015753 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 12 23:46:21.025276 ignition[694]: Ignition 2.21.0 Aug 12 23:46:21.025288 ignition[694]: Stage: fetch-offline Aug 12 23:46:21.025327 ignition[694]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:46:21.025335 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:46:21.025522 ignition[694]: parsed url from cmdline: "" Aug 12 23:46:21.025525 ignition[694]: no config URL provided Aug 12 23:46:21.025529 ignition[694]: reading system config file "/usr/lib/ignition/user.ign" Aug 12 23:46:21.025535 ignition[694]: no config at "/usr/lib/ignition/user.ign" Aug 12 23:46:21.025556 ignition[694]: op(1): [started] loading QEMU firmware config module Aug 12 23:46:21.030227 systemd-networkd[806]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 12 23:46:21.025560 ignition[694]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 12 23:46:21.039922 ignition[694]: op(1): [finished] loading QEMU firmware config module Aug 12 23:46:21.079451 ignition[694]: parsing config with SHA512: a7470946bf89800e04d9d2d0f181ab6260bf83fc28e1c8504abd439fff4ebfa4f639dc3bb96a298e79945385066189b67baf77d80ebe493580e4df701c88362e Aug 12 23:46:21.086457 unknown[694]: fetched base config from "system" Aug 12 23:46:21.086471 unknown[694]: fetched user config from "qemu" Aug 12 23:46:21.087098 ignition[694]: fetch-offline: fetch-offline passed Aug 12 23:46:21.087160 ignition[694]: Ignition finished successfully Aug 12 23:46:21.089229 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 12 23:46:21.090320 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 12 23:46:21.091123 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 12 23:46:21.121324 ignition[815]: Ignition 2.21.0 Aug 12 23:46:21.121338 ignition[815]: Stage: kargs Aug 12 23:46:21.121490 ignition[815]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:46:21.121499 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:46:21.122333 ignition[815]: kargs: kargs passed Aug 12 23:46:21.122379 ignition[815]: Ignition finished successfully Aug 12 23:46:21.126329 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 12 23:46:21.128643 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 12 23:46:21.165388 ignition[823]: Ignition 2.21.0 Aug 12 23:46:21.165400 ignition[823]: Stage: disks Aug 12 23:46:21.165739 ignition[823]: no configs at "/usr/lib/ignition/base.d" Aug 12 23:46:21.165752 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:46:21.168954 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 12 23:46:21.166950 ignition[823]: disks: disks passed Aug 12 23:46:21.169968 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 12 23:46:21.167002 ignition[823]: Ignition finished successfully Aug 12 23:46:21.171146 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 12 23:46:21.172454 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 12 23:46:21.173792 systemd[1]: Reached target sysinit.target - System Initialization. Aug 12 23:46:21.174918 systemd[1]: Reached target basic.target - Basic System. Aug 12 23:46:21.177254 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 12 23:46:21.207116 systemd-fsck[833]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 12 23:46:21.211466 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 12 23:46:21.214630 systemd[1]: Mounting sysroot.mount - /sysroot... 
Aug 12 23:46:21.292208 kernel: EXT4-fs (vda9): mounted filesystem d634334e-91a3-4b77-89ab-775bdd78a572 r/w with ordered data mode. Quota mode: none. Aug 12 23:46:21.293029 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 12 23:46:21.294147 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 12 23:46:21.296130 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 12 23:46:21.297689 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 12 23:46:21.298522 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 12 23:46:21.298559 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 12 23:46:21.298582 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 12 23:46:21.310688 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 12 23:46:21.313123 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 12 23:46:21.315561 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (842) Aug 12 23:46:21.317202 kernel: BTRFS info (device vda6): first mount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb Aug 12 23:46:21.317230 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:46:21.317241 kernel: BTRFS info (device vda6): using free-space-tree Aug 12 23:46:21.319760 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 12 23:46:21.372440 initrd-setup-root[866]: cut: /sysroot/etc/passwd: No such file or directory Aug 12 23:46:21.376886 initrd-setup-root[873]: cut: /sysroot/etc/group: No such file or directory Aug 12 23:46:21.380719 initrd-setup-root[880]: cut: /sysroot/etc/shadow: No such file or directory Aug 12 23:46:21.385207 initrd-setup-root[887]: cut: /sysroot/etc/gshadow: No such file or directory Aug 12 23:46:21.476020 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 12 23:46:21.477874 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 12 23:46:21.479202 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 12 23:46:21.495208 kernel: BTRFS info (device vda6): last unmount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb Aug 12 23:46:21.511482 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 12 23:46:21.515819 ignition[956]: INFO : Ignition 2.21.0 Aug 12 23:46:21.515819 ignition[956]: INFO : Stage: mount Aug 12 23:46:21.517725 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:46:21.517725 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:46:21.519228 ignition[956]: INFO : mount: mount passed Aug 12 23:46:21.519228 ignition[956]: INFO : Ignition finished successfully Aug 12 23:46:21.520700 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 12 23:46:21.522282 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 12 23:46:21.841928 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 12 23:46:21.843448 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Aug 12 23:46:21.859534 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (968) Aug 12 23:46:21.859566 kernel: BTRFS info (device vda6): first mount of filesystem cff59a55-3bd9-4c36-9f7f-aabedbf210fb Aug 12 23:46:21.859577 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 12 23:46:21.860241 kernel: BTRFS info (device vda6): using free-space-tree Aug 12 23:46:21.863352 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 12 23:46:21.896672 ignition[985]: INFO : Ignition 2.21.0 Aug 12 23:46:21.896672 ignition[985]: INFO : Stage: files Aug 12 23:46:21.898821 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:46:21.898821 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:46:21.898821 ignition[985]: DEBUG : files: compiled without relabeling support, skipping Aug 12 23:46:21.901121 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 12 23:46:21.901121 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 12 23:46:21.901121 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 12 23:46:21.901121 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 12 23:46:21.905107 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 12 23:46:21.905107 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Aug 12 23:46:21.905107 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Aug 12 23:46:21.901203 unknown[985]: wrote ssh authorized keys file for user: core Aug 12 23:46:22.089755 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET 
result: OK Aug 12 23:46:22.469730 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Aug 12 23:46:22.471259 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 12 23:46:22.471259 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 12 23:46:22.471259 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 12 23:46:22.471259 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 12 23:46:22.471259 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 12 23:46:22.471259 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 12 23:46:22.471259 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 12 23:46:22.471259 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 12 23:46:22.480982 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 12 23:46:22.480982 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 12 23:46:22.480982 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 12 23:46:22.480982 ignition[985]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 12 23:46:22.480982 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 12 23:46:22.480982 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Aug 12 23:46:22.933735 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 12 23:46:23.038330 systemd-networkd[806]: eth0: Gained IPv6LL Aug 12 23:46:23.453074 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 12 23:46:23.453074 ignition[985]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 12 23:46:23.456078 ignition[985]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 12 23:46:23.456078 ignition[985]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 12 23:46:23.456078 ignition[985]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 12 23:46:23.456078 ignition[985]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Aug 12 23:46:23.456078 ignition[985]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 12 23:46:23.462456 ignition[985]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 12 23:46:23.462456 ignition[985]: INFO : files: op(d): [finished] processing unit 
"coreos-metadata.service" Aug 12 23:46:23.462456 ignition[985]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Aug 12 23:46:23.477950 ignition[985]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 12 23:46:23.480760 ignition[985]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 12 23:46:23.481900 ignition[985]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Aug 12 23:46:23.481900 ignition[985]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Aug 12 23:46:23.481900 ignition[985]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Aug 12 23:46:23.481900 ignition[985]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:46:23.481900 ignition[985]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 12 23:46:23.481900 ignition[985]: INFO : files: files passed Aug 12 23:46:23.481900 ignition[985]: INFO : Ignition finished successfully Aug 12 23:46:23.483583 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 12 23:46:23.487245 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 12 23:46:23.497494 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 12 23:46:23.500424 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 12 23:46:23.501213 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Aug 12 23:46:23.504787 initrd-setup-root-after-ignition[1015]: grep: /sysroot/oem/oem-release: No such file or directory Aug 12 23:46:23.507171 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:46:23.507171 initrd-setup-root-after-ignition[1017]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:46:23.509775 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 12 23:46:23.510457 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:46:23.511898 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 12 23:46:23.513953 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 12 23:46:23.568076 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 12 23:46:23.568218 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 12 23:46:23.569870 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 12 23:46:23.571102 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 12 23:46:23.572579 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 12 23:46:23.573392 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 12 23:46:23.602550 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 12 23:46:23.604675 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 12 23:46:23.625452 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 12 23:46:23.626365 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 12 23:46:23.627824 systemd[1]: Stopped target timers.target - Timer Units. 
Aug 12 23:46:23.629081 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 12 23:46:23.629204 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 12 23:46:23.630986 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 12 23:46:23.632431 systemd[1]: Stopped target basic.target - Basic System. Aug 12 23:46:23.633620 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 12 23:46:23.634864 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 12 23:46:23.636241 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 12 23:46:23.637676 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 12 23:46:23.639024 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 12 23:46:23.640352 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 12 23:46:23.641788 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 12 23:46:23.643166 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 12 23:46:23.644426 systemd[1]: Stopped target swap.target - Swaps. Aug 12 23:46:23.645512 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 12 23:46:23.645620 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 12 23:46:23.647411 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 12 23:46:23.648836 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 12 23:46:23.650215 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 12 23:46:23.651671 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 12 23:46:23.652615 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 12 23:46:23.652733 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Aug 12 23:46:23.654802 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 12 23:46:23.654916 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 12 23:46:23.656343 systemd[1]: Stopped target paths.target - Path Units. Aug 12 23:46:23.657479 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 12 23:46:23.658271 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 12 23:46:23.659711 systemd[1]: Stopped target slices.target - Slice Units. Aug 12 23:46:23.660912 systemd[1]: Stopped target sockets.target - Socket Units. Aug 12 23:46:23.662431 systemd[1]: iscsid.socket: Deactivated successfully. Aug 12 23:46:23.662512 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 12 23:46:23.663607 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 12 23:46:23.663696 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 12 23:46:23.664846 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 12 23:46:23.664956 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 12 23:46:23.666135 systemd[1]: ignition-files.service: Deactivated successfully. Aug 12 23:46:23.666253 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 12 23:46:23.668067 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 12 23:46:23.669046 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 12 23:46:23.669167 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 12 23:46:23.671299 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 12 23:46:23.671913 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 12 23:46:23.672029 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Aug 12 23:46:23.673343 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 12 23:46:23.673438 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 12 23:46:23.677854 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 12 23:46:23.681324 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 12 23:46:23.689089 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 12 23:46:23.693039 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 12 23:46:23.693131 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 12 23:46:23.695014 ignition[1042]: INFO : Ignition 2.21.0 Aug 12 23:46:23.695014 ignition[1042]: INFO : Stage: umount Aug 12 23:46:23.695014 ignition[1042]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 12 23:46:23.695014 ignition[1042]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 12 23:46:23.697929 ignition[1042]: INFO : umount: umount passed Aug 12 23:46:23.697929 ignition[1042]: INFO : Ignition finished successfully Aug 12 23:46:23.698381 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 12 23:46:23.698467 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 12 23:46:23.699390 systemd[1]: Stopped target network.target - Network. Aug 12 23:46:23.701284 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 12 23:46:23.701342 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 12 23:46:23.702499 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 12 23:46:23.702537 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 12 23:46:23.703751 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 12 23:46:23.703797 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 12 23:46:23.704946 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Aug 12 23:46:23.704980 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 12 23:46:23.706323 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 12 23:46:23.706367 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 12 23:46:23.707770 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 12 23:46:23.709042 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 12 23:46:23.713659 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 12 23:46:23.713763 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 12 23:46:23.717303 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 12 23:46:23.718486 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 12 23:46:23.718524 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 12 23:46:23.721182 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 12 23:46:23.721389 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 12 23:46:23.721478 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 12 23:46:23.724906 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 12 23:46:23.725287 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 12 23:46:23.726617 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 12 23:46:23.726664 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 12 23:46:23.728764 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 12 23:46:23.729930 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 12 23:46:23.729975 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Aug 12 23:46:23.731376 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 12 23:46:23.731414 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:46:23.733506 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 12 23:46:23.733545 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:46:23.734985 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:46:23.736943 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 12 23:46:23.751048 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 12 23:46:23.751153 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 12 23:46:23.752675 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 12 23:46:23.752800 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:46:23.754388 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 12 23:46:23.754448 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:46:23.755236 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 12 23:46:23.755265 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:46:23.756760 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 12 23:46:23.756802 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 12 23:46:23.758787 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 12 23:46:23.758829 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 12 23:46:23.760708 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 12 23:46:23.760752 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 12 23:46:23.763524 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 12 23:46:23.764825 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 12 23:46:23.764873 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:46:23.767130 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 12 23:46:23.767167 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:46:23.769447 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 12 23:46:23.769485 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 12 23:46:23.777767 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 12 23:46:23.777876 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 12 23:46:23.779428 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 12 23:46:23.781293 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 12 23:46:23.801391 systemd[1]: Switching root.
Aug 12 23:46:23.832298 systemd-journald[245]: Journal stopped
Aug 12 23:46:24.554876 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Aug 12 23:46:24.554925 kernel: SELinux: policy capability network_peer_controls=1
Aug 12 23:46:24.554937 kernel: SELinux: policy capability open_perms=1
Aug 12 23:46:24.554947 kernel: SELinux: policy capability extended_socket_class=1
Aug 12 23:46:24.554956 kernel: SELinux: policy capability always_check_network=0
Aug 12 23:46:24.554965 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 12 23:46:24.554974 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 12 23:46:24.554983 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 12 23:46:24.554998 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 12 23:46:24.555009 kernel: SELinux: policy capability userspace_initial_context=0
Aug 12 23:46:24.555019 kernel: audit: type=1403 audit(1755042383.998:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 12 23:46:24.555033 systemd[1]: Successfully loaded SELinux policy in 51.153ms.
Aug 12 23:46:24.555048 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.259ms.
Aug 12 23:46:24.555059 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 12 23:46:24.555070 systemd[1]: Detected virtualization kvm.
Aug 12 23:46:24.555079 systemd[1]: Detected architecture arm64.
Aug 12 23:46:24.555092 systemd[1]: Detected first boot.
Aug 12 23:46:24.555102 systemd[1]: Initializing machine ID from VM UUID.
Aug 12 23:46:24.555111 zram_generator::config[1087]: No configuration found.
Aug 12 23:46:24.555123 kernel: NET: Registered PF_VSOCK protocol family
Aug 12 23:46:24.555132 systemd[1]: Populated /etc with preset unit settings.
Aug 12 23:46:24.555146 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 12 23:46:24.555155 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 12 23:46:24.555165 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 12 23:46:24.555175 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 12 23:46:24.555207 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 12 23:46:24.555220 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 12 23:46:24.555231 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 12 23:46:24.555243 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 12 23:46:24.555253 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 12 23:46:24.555263 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 12 23:46:24.555272 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 12 23:46:24.555282 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 12 23:46:24.555292 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 12 23:46:24.555304 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 12 23:46:24.555314 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 12 23:46:24.555325 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 12 23:46:24.555335 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 12 23:46:24.555345 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 12 23:46:24.555354 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 12 23:46:24.555364 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 12 23:46:24.555374 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 12 23:46:24.555383 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 12 23:46:24.555393 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 12 23:46:24.555404 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 12 23:46:24.555415 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 12 23:46:24.555425 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 12 23:46:24.555434 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 12 23:46:24.555444 systemd[1]: Reached target slices.target - Slice Units.
Aug 12 23:46:24.555454 systemd[1]: Reached target swap.target - Swaps.
Aug 12 23:46:24.555463 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 12 23:46:24.555473 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 12 23:46:24.555483 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 12 23:46:24.555500 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 12 23:46:24.555510 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 12 23:46:24.555521 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 12 23:46:24.555531 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 12 23:46:24.555541 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 12 23:46:24.555551 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 12 23:46:24.555563 systemd[1]: Mounting media.mount - External Media Directory...
Aug 12 23:46:24.555573 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 12 23:46:24.555582 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 12 23:46:24.555594 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 12 23:46:24.555604 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 12 23:46:24.555614 systemd[1]: Reached target machines.target - Containers.
Aug 12 23:46:24.555629 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 12 23:46:24.555641 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:46:24.555651 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 12 23:46:24.555661 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 12 23:46:24.555671 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:46:24.555681 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:46:24.555692 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:46:24.555702 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 12 23:46:24.555712 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:46:24.555722 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 12 23:46:24.555731 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 12 23:46:24.555746 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 12 23:46:24.555756 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 12 23:46:24.555766 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 12 23:46:24.555779 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:46:24.555789 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 12 23:46:24.555799 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 12 23:46:24.555808 kernel: loop: module loaded
Aug 12 23:46:24.555817 kernel: fuse: init (API version 7.41)
Aug 12 23:46:24.555827 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 12 23:46:24.555837 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 12 23:46:24.555846 kernel: ACPI: bus type drm_connector registered
Aug 12 23:46:24.555856 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 12 23:46:24.555867 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 12 23:46:24.555877 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 12 23:46:24.555887 systemd[1]: Stopped verity-setup.service.
Aug 12 23:46:24.555899 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 12 23:46:24.555927 systemd-journald[1155]: Collecting audit messages is disabled.
Aug 12 23:46:24.555952 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 12 23:46:24.555962 systemd-journald[1155]: Journal started
Aug 12 23:46:24.555982 systemd-journald[1155]: Runtime Journal (/run/log/journal/85d0fa39897f49ec9775ead503618bea) is 6M, max 48.5M, 42.4M free.
Aug 12 23:46:24.374827 systemd[1]: Queued start job for default target multi-user.target.
Aug 12 23:46:24.392072 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 12 23:46:24.392463 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 12 23:46:24.557684 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 12 23:46:24.558742 systemd[1]: Mounted media.mount - External Media Directory.
Aug 12 23:46:24.560310 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 12 23:46:24.561165 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 12 23:46:24.562107 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 12 23:46:24.564219 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 12 23:46:24.565345 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 12 23:46:24.565499 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 12 23:46:24.566621 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:46:24.566789 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:46:24.567886 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 12 23:46:24.569078 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:46:24.569246 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:46:24.570255 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:46:24.570413 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:46:24.571460 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 12 23:46:24.571613 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 12 23:46:24.572716 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:46:24.572861 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:46:24.573972 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 12 23:46:24.575097 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 12 23:46:24.576485 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 12 23:46:24.577716 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 12 23:46:24.589046 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 12 23:46:24.591203 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 12 23:46:24.592860 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 12 23:46:24.593747 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 12 23:46:24.593780 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 12 23:46:24.595431 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 12 23:46:24.603974 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 12 23:46:24.604849 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:46:24.605858 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 12 23:46:24.607691 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 12 23:46:24.608730 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:46:24.610825 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 12 23:46:24.611800 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:46:24.613854 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 12 23:46:24.619888 systemd-journald[1155]: Time spent on flushing to /var/log/journal/85d0fa39897f49ec9775ead503618bea is 17.908ms for 877 entries.
Aug 12 23:46:24.619888 systemd-journald[1155]: System Journal (/var/log/journal/85d0fa39897f49ec9775ead503618bea) is 8M, max 195.6M, 187.6M free.
Aug 12 23:46:24.642137 systemd-journald[1155]: Received client request to flush runtime journal.
Aug 12 23:46:24.620027 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 12 23:46:24.623452 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 12 23:46:24.633354 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 12 23:46:24.635418 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 12 23:46:24.636594 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 12 23:46:24.637824 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 12 23:46:24.640211 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 12 23:46:24.644017 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 12 23:46:24.650104 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 12 23:46:24.651502 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 12 23:46:24.658915 kernel: loop0: detected capacity change from 0 to 138376
Aug 12 23:46:24.662496 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 12 23:46:24.666333 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 12 23:46:24.674550 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 12 23:46:24.685245 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 12 23:46:24.691210 kernel: loop1: detected capacity change from 0 to 207008
Aug 12 23:46:24.691886 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Aug 12 23:46:24.691903 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Aug 12 23:46:24.696721 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 12 23:46:24.708201 kernel: loop2: detected capacity change from 0 to 107312
Aug 12 23:46:24.738223 kernel: loop3: detected capacity change from 0 to 138376
Aug 12 23:46:24.746207 kernel: loop4: detected capacity change from 0 to 207008
Aug 12 23:46:24.753213 kernel: loop5: detected capacity change from 0 to 107312
Aug 12 23:46:24.757094 (sd-merge)[1228]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 12 23:46:24.757478 (sd-merge)[1228]: Merged extensions into '/usr'.
Aug 12 23:46:24.761041 systemd[1]: Reload requested from client PID 1203 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 12 23:46:24.761351 systemd[1]: Reloading...
Aug 12 23:46:24.810214 zram_generator::config[1251]: No configuration found.
Aug 12 23:46:24.883248 ldconfig[1198]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 12 23:46:24.889150 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:46:24.950600 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 12 23:46:24.950847 systemd[1]: Reloading finished in 188 ms.
Aug 12 23:46:24.982574 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 12 23:46:24.983714 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 12 23:46:25.009439 systemd[1]: Starting ensure-sysext.service...
Aug 12 23:46:25.011037 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 12 23:46:25.025751 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Aug 12 23:46:25.026164 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Aug 12 23:46:25.026563 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 12 23:46:25.026839 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 12 23:46:25.027546 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 12 23:46:25.027851 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Aug 12 23:46:25.027957 systemd-tmpfiles[1289]: ACLs are not supported, ignoring.
Aug 12 23:46:25.030448 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:46:25.030552 systemd-tmpfiles[1289]: Skipping /boot
Aug 12 23:46:25.034117 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)...
Aug 12 23:46:25.034132 systemd[1]: Reloading...
Aug 12 23:46:25.039481 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot.
Aug 12 23:46:25.039588 systemd-tmpfiles[1289]: Skipping /boot
Aug 12 23:46:25.073209 zram_generator::config[1316]: No configuration found.
Aug 12 23:46:25.143289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 12 23:46:25.205381 systemd[1]: Reloading finished in 170 ms.
Aug 12 23:46:25.227566 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 12 23:46:25.228791 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 12 23:46:25.246158 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 12 23:46:25.248200 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 12 23:46:25.249966 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 12 23:46:25.255331 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 12 23:46:25.257526 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 12 23:46:25.263815 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 12 23:46:25.275841 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 12 23:46:25.279119 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 12 23:46:25.284658 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:46:25.288956 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:46:25.292524 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:46:25.295481 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:46:25.297546 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:46:25.297754 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:46:25.298956 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 12 23:46:25.301730 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 12 23:46:25.304969 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:46:25.305112 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:46:25.309792 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:46:25.309939 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:46:25.311387 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:46:25.311527 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:46:25.312886 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 12 23:46:25.317283 systemd-udevd[1358]: Using default interface naming scheme 'v255'.
Aug 12 23:46:25.317979 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 12 23:46:25.323294 augenrules[1389]: No rules
Aug 12 23:46:25.324026 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:46:25.325469 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:46:25.327223 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:46:25.334149 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:46:25.334951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:46:25.335118 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:46:25.335227 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:46:25.336089 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 12 23:46:25.344983 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 12 23:46:25.346491 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:46:25.346676 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 12 23:46:25.347974 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:46:25.348102 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:46:25.349529 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:46:25.349737 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:46:25.351534 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:46:25.351721 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:46:25.363895 systemd[1]: Finished ensure-sysext.service.
Aug 12 23:46:25.372146 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 12 23:46:25.372979 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 12 23:46:25.373837 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 12 23:46:25.377065 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 12 23:46:25.378921 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 12 23:46:25.381179 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 12 23:46:25.382395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 12 23:46:25.382436 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 12 23:46:25.385344 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 12 23:46:25.389451 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 12 23:46:25.392268 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 12 23:46:25.406638 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Aug 12 23:46:25.409227 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 12 23:46:25.409398 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 12 23:46:25.412753 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 12 23:46:25.412909 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 12 23:46:25.414647 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 12 23:46:25.414898 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 12 23:46:25.416628 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 12 23:46:25.418531 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 12 23:46:25.418707 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 12 23:46:25.424333 augenrules[1434]: /sbin/augenrules: No change
Aug 12 23:46:25.426442 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 12 23:46:25.433724 augenrules[1468]: No rules
Aug 12 23:46:25.434944 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 12 23:46:25.435641 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 12 23:46:25.464733 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 12 23:46:25.466808 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 12 23:46:25.504017 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 12 23:46:25.547230 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 12 23:46:25.558754 systemd-networkd[1439]: lo: Link UP
Aug 12 23:46:25.558763 systemd-networkd[1439]: lo: Gained carrier
Aug 12 23:46:25.559567 systemd-networkd[1439]: Enumeration completed
Aug 12 23:46:25.559921 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 12 23:46:25.559975 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:46:25.559978 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 12 23:46:25.560479 systemd-networkd[1439]: eth0: Link UP
Aug 12 23:46:25.560572 systemd-networkd[1439]: eth0: Gained carrier
Aug 12 23:46:25.560584 systemd-networkd[1439]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 12 23:46:25.562328 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 12 23:46:25.567420 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 12 23:46:25.578432 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 12 23:46:25.579416 systemd[1]: Reached target time-set.target - System Time Set. Aug 12 23:46:25.582232 systemd-networkd[1439]: eth0: DHCPv4 address 10.0.0.67/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 12 23:46:25.584815 systemd-resolved[1356]: Positive Trust Anchors: Aug 12 23:46:25.587091 systemd-timesyncd[1441]: Network configuration changed, trying to establish connection. Aug 12 23:46:25.590217 systemd-resolved[1356]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 12 23:46:25.590258 systemd-resolved[1356]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 12 23:46:25.190934 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1). Aug 12 23:46:25.196867 systemd-journald[1155]: Time jumped backwards, rotating. Aug 12 23:46:25.190987 systemd-timesyncd[1441]: Initial clock synchronization to Tue 2025-08-12 23:46:25.190845 UTC. Aug 12 23:46:25.195024 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 12 23:46:25.202966 systemd-resolved[1356]: Defaulting to hostname 'linux'. Aug 12 23:46:25.209860 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 12 23:46:25.210777 systemd[1]: Reached target network.target - Network. 
Aug 12 23:46:25.211413 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 12 23:46:25.220836 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 12 23:46:25.221918 systemd[1]: Reached target sysinit.target - System Initialization. Aug 12 23:46:25.222854 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 12 23:46:25.223784 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 12 23:46:25.224977 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 12 23:46:25.225882 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 12 23:46:25.226851 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 12 23:46:25.227741 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 12 23:46:25.227772 systemd[1]: Reached target paths.target - Path Units. Aug 12 23:46:25.228410 systemd[1]: Reached target timers.target - Timer Units. Aug 12 23:46:25.229942 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 12 23:46:25.231975 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 12 23:46:25.234941 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 12 23:46:25.236108 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 12 23:46:25.236999 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 12 23:46:25.243974 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 12 23:46:25.245369 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. 
Aug 12 23:46:25.246767 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 12 23:46:25.247661 systemd[1]: Reached target sockets.target - Socket Units. Aug 12 23:46:25.248358 systemd[1]: Reached target basic.target - Basic System. Aug 12 23:46:25.249037 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 12 23:46:25.249067 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 12 23:46:25.250042 systemd[1]: Starting containerd.service - containerd container runtime... Aug 12 23:46:25.251785 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 12 23:46:25.253423 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 12 23:46:25.255140 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 12 23:46:25.257817 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 12 23:46:25.258703 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 12 23:46:25.259776 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 12 23:46:25.265195 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 12 23:46:25.269335 jq[1509]: false Aug 12 23:46:25.268594 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 12 23:46:25.270673 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 12 23:46:25.275016 extend-filesystems[1510]: Found /dev/vda6 Aug 12 23:46:25.275233 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 12 23:46:25.279706 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Aug 12 23:46:25.280209 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 12 23:46:25.280799 systemd[1]: Starting update-engine.service - Update Engine... Aug 12 23:46:25.282900 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 12 23:46:25.284920 extend-filesystems[1510]: Found /dev/vda9 Aug 12 23:46:25.285883 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 12 23:46:25.287199 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 12 23:46:25.287411 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 12 23:46:25.287691 systemd[1]: motdgen.service: Deactivated successfully. Aug 12 23:46:25.287873 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 12 23:46:25.291099 jq[1528]: true Aug 12 23:46:25.290693 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 12 23:46:25.291276 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Aug 12 23:46:25.296142 extend-filesystems[1510]: Checking size of /dev/vda9 Aug 12 23:46:25.307101 (ntainerd)[1536]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 12 23:46:25.308610 extend-filesystems[1510]: Resized partition /dev/vda9 Aug 12 23:46:25.315613 jq[1533]: true Aug 12 23:46:25.319522 tar[1532]: linux-arm64/LICENSE Aug 12 23:46:25.319522 tar[1532]: linux-arm64/helm Aug 12 23:46:25.330100 extend-filesystems[1548]: resize2fs 1.47.2 (1-Jan-2025) Aug 12 23:46:25.333152 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Aug 12 23:46:25.346952 update_engine[1526]: I20250812 23:46:25.346818 1526 main.cc:92] Flatcar Update Engine starting Aug 12 23:46:25.357383 dbus-daemon[1507]: [system] SELinux support is enabled Aug 12 23:46:25.357563 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 12 23:46:25.361059 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 12 23:46:25.361118 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 12 23:46:25.361289 update_engine[1526]: I20250812 23:46:25.361251 1526 update_check_scheduler.cc:74] Next update check in 7m3s Aug 12 23:46:25.362926 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 12 23:46:25.362950 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 12 23:46:25.367740 systemd[1]: Started update-engine.service - Update Engine. Aug 12 23:46:25.370346 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Aug 12 23:46:25.372552 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Aug 12 23:46:25.386423 extend-filesystems[1548]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 12 23:46:25.386423 extend-filesystems[1548]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 12 23:46:25.386423 extend-filesystems[1548]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Aug 12 23:46:25.390819 extend-filesystems[1510]: Resized filesystem in /dev/vda9 Aug 12 23:46:25.389127 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 12 23:46:25.393474 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 12 23:46:25.394785 bash[1566]: Updated "/home/core/.ssh/authorized_keys" Aug 12 23:46:25.398130 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 12 23:46:25.399444 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 12 23:46:25.411539 systemd-logind[1520]: Watching system buttons on /dev/input/event0 (Power Button) Aug 12 23:46:25.412120 systemd-logind[1520]: New seat seat0. Aug 12 23:46:25.414763 systemd[1]: Started systemd-logind.service - User Login Management. 
Aug 12 23:46:25.432639 locksmithd[1567]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 12 23:46:25.549717 containerd[1536]: time="2025-08-12T23:46:25Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 12 23:46:25.552566 containerd[1536]: time="2025-08-12T23:46:25.552391917Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 12 23:46:25.563248 sshd_keygen[1531]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 12 23:46:25.565981 containerd[1536]: time="2025-08-12T23:46:25.565944597Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.72µs" Aug 12 23:46:25.566102 containerd[1536]: time="2025-08-12T23:46:25.566045717Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 12 23:46:25.566102 containerd[1536]: time="2025-08-12T23:46:25.566069557Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 12 23:46:25.567743 containerd[1536]: time="2025-08-12T23:46:25.567711917Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 12 23:46:25.567789 containerd[1536]: time="2025-08-12T23:46:25.567744757Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 12 23:46:25.567789 containerd[1536]: time="2025-08-12T23:46:25.567773277Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 12 23:46:25.567919 containerd[1536]: time="2025-08-12T23:46:25.567823597Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Aug 12 23:46:25.567919 containerd[1536]: time="2025-08-12T23:46:25.567842677Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 12 23:46:25.568066 containerd[1536]: time="2025-08-12T23:46:25.568043317Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 12 23:46:25.568066 containerd[1536]: time="2025-08-12T23:46:25.568062797Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 12 23:46:25.568119 containerd[1536]: time="2025-08-12T23:46:25.568074597Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 12 23:46:25.568119 containerd[1536]: time="2025-08-12T23:46:25.568099237Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 12 23:46:25.568188 containerd[1536]: time="2025-08-12T23:46:25.568171797Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 12 23:46:25.568366 containerd[1536]: time="2025-08-12T23:46:25.568348517Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 12 23:46:25.568394 containerd[1536]: time="2025-08-12T23:46:25.568381197Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 12 23:46:25.568422 containerd[1536]: time="2025-08-12T23:46:25.568393877Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange 
type=io.containerd.event.v1 Aug 12 23:46:25.568440 containerd[1536]: time="2025-08-12T23:46:25.568421997Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 12 23:46:25.568773 containerd[1536]: time="2025-08-12T23:46:25.568753437Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 12 23:46:25.568842 containerd[1536]: time="2025-08-12T23:46:25.568825837Z" level=info msg="metadata content store policy set" policy=shared Aug 12 23:46:25.572617 containerd[1536]: time="2025-08-12T23:46:25.572578117Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 12 23:46:25.572617 containerd[1536]: time="2025-08-12T23:46:25.572626677Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 12 23:46:25.572617 containerd[1536]: time="2025-08-12T23:46:25.572649277Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 12 23:46:25.572617 containerd[1536]: time="2025-08-12T23:46:25.572662597Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 12 23:46:25.572617 containerd[1536]: time="2025-08-12T23:46:25.572680877Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 12 23:46:25.572617 containerd[1536]: time="2025-08-12T23:46:25.572691237Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 12 23:46:25.572617 containerd[1536]: time="2025-08-12T23:46:25.572701637Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 12 23:46:25.572617 containerd[1536]: time="2025-08-12T23:46:25.572713237Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 12 
23:46:25.572617 containerd[1536]: time="2025-08-12T23:46:25.572725437Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 12 23:46:25.572617 containerd[1536]: time="2025-08-12T23:46:25.572735957Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 12 23:46:25.572617 containerd[1536]: time="2025-08-12T23:46:25.572746037Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 12 23:46:25.572617 containerd[1536]: time="2025-08-12T23:46:25.572760517Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 12 23:46:25.572617 containerd[1536]: time="2025-08-12T23:46:25.572861517Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 12 23:46:25.572617 containerd[1536]: time="2025-08-12T23:46:25.572883197Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 12 23:46:25.573592 containerd[1536]: time="2025-08-12T23:46:25.572896837Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 12 23:46:25.573592 containerd[1536]: time="2025-08-12T23:46:25.572907157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 12 23:46:25.573592 containerd[1536]: time="2025-08-12T23:46:25.572916757Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 12 23:46:25.573592 containerd[1536]: time="2025-08-12T23:46:25.572927157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 12 23:46:25.573592 containerd[1536]: time="2025-08-12T23:46:25.572938237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 12 23:46:25.573592 containerd[1536]: 
time="2025-08-12T23:46:25.572948037Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 12 23:46:25.573592 containerd[1536]: time="2025-08-12T23:46:25.572958797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 12 23:46:25.573592 containerd[1536]: time="2025-08-12T23:46:25.572968957Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 12 23:46:25.573592 containerd[1536]: time="2025-08-12T23:46:25.572978557Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 12 23:46:25.573592 containerd[1536]: time="2025-08-12T23:46:25.573214397Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 12 23:46:25.573592 containerd[1536]: time="2025-08-12T23:46:25.573232797Z" level=info msg="Start snapshots syncer" Aug 12 23:46:25.573592 containerd[1536]: time="2025-08-12T23:46:25.573267797Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 12 23:46:25.573772 containerd[1536]: time="2025-08-12T23:46:25.573589917Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 12 23:46:25.573772 containerd[1536]: time="2025-08-12T23:46:25.573640477Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 12 23:46:25.573858 containerd[1536]: time="2025-08-12T23:46:25.573702557Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 12 23:46:25.573858 containerd[1536]: time="2025-08-12T23:46:25.573792997Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 12 23:46:25.573858 containerd[1536]: time="2025-08-12T23:46:25.573815597Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 12 23:46:25.573858 containerd[1536]: time="2025-08-12T23:46:25.573827117Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 12 23:46:25.573858 containerd[1536]: time="2025-08-12T23:46:25.573837277Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 12 23:46:25.573858 containerd[1536]: time="2025-08-12T23:46:25.573848437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 12 23:46:25.573858 containerd[1536]: time="2025-08-12T23:46:25.573858797Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 12 23:46:25.573995 containerd[1536]: time="2025-08-12T23:46:25.573870117Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 12 23:46:25.573995 containerd[1536]: time="2025-08-12T23:46:25.573893557Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 12 23:46:25.573995 containerd[1536]: time="2025-08-12T23:46:25.573904877Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 12 23:46:25.573995 containerd[1536]: time="2025-08-12T23:46:25.573914837Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 12 23:46:25.573995 containerd[1536]: time="2025-08-12T23:46:25.573947317Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 12 23:46:25.573995 containerd[1536]: time="2025-08-12T23:46:25.573959797Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 12 23:46:25.573995 containerd[1536]: time="2025-08-12T23:46:25.573968317Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 12 23:46:25.573995 containerd[1536]: time="2025-08-12T23:46:25.573977317Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 12 23:46:25.573995 containerd[1536]: time="2025-08-12T23:46:25.573985597Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 12 23:46:25.574165 containerd[1536]: time="2025-08-12T23:46:25.573999077Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 12 23:46:25.574165 containerd[1536]: time="2025-08-12T23:46:25.574009957Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 12 23:46:25.574165 containerd[1536]: time="2025-08-12T23:46:25.574104757Z" level=info msg="runtime interface created" Aug 12 23:46:25.574165 containerd[1536]: time="2025-08-12T23:46:25.574110797Z" level=info msg="created NRI interface" Aug 12 23:46:25.574165 containerd[1536]: time="2025-08-12T23:46:25.574124517Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 12 23:46:25.574165 containerd[1536]: time="2025-08-12T23:46:25.574135277Z" level=info msg="Connect containerd service" Aug 12 23:46:25.574165 containerd[1536]: time="2025-08-12T23:46:25.574158357Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 12 23:46:25.574935 
containerd[1536]: time="2025-08-12T23:46:25.574896717Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 12 23:46:25.587242 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 12 23:46:25.590851 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 12 23:46:25.609778 systemd[1]: issuegen.service: Deactivated successfully. Aug 12 23:46:25.611139 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 12 23:46:25.613475 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 12 23:46:25.635271 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 12 23:46:25.641472 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 12 23:46:25.644287 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Aug 12 23:46:25.645602 systemd[1]: Reached target getty.target - Login Prompts. Aug 12 23:46:25.692427 containerd[1536]: time="2025-08-12T23:46:25.692358477Z" level=info msg="Start subscribing containerd event" Aug 12 23:46:25.692547 containerd[1536]: time="2025-08-12T23:46:25.692440357Z" level=info msg="Start recovering state" Aug 12 23:46:25.692547 containerd[1536]: time="2025-08-12T23:46:25.692534757Z" level=info msg="Start event monitor" Aug 12 23:46:25.692584 containerd[1536]: time="2025-08-12T23:46:25.692563437Z" level=info msg="Start cni network conf syncer for default" Aug 12 23:46:25.692584 containerd[1536]: time="2025-08-12T23:46:25.692571557Z" level=info msg="Start streaming server" Aug 12 23:46:25.692584 containerd[1536]: time="2025-08-12T23:46:25.692581677Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 12 23:46:25.692629 containerd[1536]: time="2025-08-12T23:46:25.692589477Z" level=info msg="runtime interface starting up..." 
Aug 12 23:46:25.692629 containerd[1536]: time="2025-08-12T23:46:25.692595557Z" level=info msg="starting plugins..." Aug 12 23:46:25.692629 containerd[1536]: time="2025-08-12T23:46:25.692608837Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 12 23:46:25.692941 containerd[1536]: time="2025-08-12T23:46:25.692923757Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 12 23:46:25.692982 containerd[1536]: time="2025-08-12T23:46:25.692969957Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 12 23:46:25.693113 systemd[1]: Started containerd.service - containerd container runtime. Aug 12 23:46:25.694851 containerd[1536]: time="2025-08-12T23:46:25.694815957Z" level=info msg="containerd successfully booted in 0.145603s" Aug 12 23:46:25.737283 tar[1532]: linux-arm64/README.md Aug 12 23:46:25.753997 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 12 23:46:26.221233 systemd-networkd[1439]: eth0: Gained IPv6LL Aug 12 23:46:26.225158 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 12 23:46:26.226654 systemd[1]: Reached target network-online.target - Network is Online. Aug 12 23:46:26.228740 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Aug 12 23:46:26.230700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:46:26.232476 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 12 23:46:26.266431 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 12 23:46:26.267658 systemd[1]: coreos-metadata.service: Deactivated successfully. Aug 12 23:46:26.267875 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Aug 12 23:46:26.269726 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Aug 12 23:46:26.789838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:46:26.791046 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 12 23:46:26.791971 systemd[1]: Startup finished in 2.109s (kernel) + 5.356s (initrd) + 3.248s (userspace) = 10.714s. Aug 12 23:46:26.793373 (kubelet)[1639]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:46:27.175492 kubelet[1639]: E0812 23:46:27.175401 1639 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:46:27.177804 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:46:27.177940 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:46:27.178265 systemd[1]: kubelet.service: Consumed 790ms CPU time, 256.6M memory peak. Aug 12 23:46:31.582279 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 12 23:46:31.583342 systemd[1]: Started sshd@0-10.0.0.67:22-10.0.0.1:53434.service - OpenSSH per-connection server daemon (10.0.0.1:53434). Aug 12 23:46:31.673234 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 53434 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:46:31.674916 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:46:31.680359 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 12 23:46:31.681224 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 12 23:46:31.686478 systemd-logind[1520]: New session 1 of user core. 
Aug 12 23:46:31.698254 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 12 23:46:31.702693 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 12 23:46:31.725861 (systemd)[1656]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 12 23:46:31.727829 systemd-logind[1520]: New session c1 of user core. Aug 12 23:46:31.841630 systemd[1656]: Queued start job for default target default.target. Aug 12 23:46:31.852423 systemd[1656]: Created slice app.slice - User Application Slice. Aug 12 23:46:31.852455 systemd[1656]: Reached target paths.target - Paths. Aug 12 23:46:31.852488 systemd[1656]: Reached target timers.target - Timers. Aug 12 23:46:31.853618 systemd[1656]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 12 23:46:31.861785 systemd[1656]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 12 23:46:31.861914 systemd[1656]: Reached target sockets.target - Sockets. Aug 12 23:46:31.862019 systemd[1656]: Reached target basic.target - Basic System. Aug 12 23:46:31.862167 systemd[1656]: Reached target default.target - Main User Target. Aug 12 23:46:31.862224 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 12 23:46:31.862842 systemd[1656]: Startup finished in 129ms. Aug 12 23:46:31.863340 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 12 23:46:31.928125 systemd[1]: Started sshd@1-10.0.0.67:22-10.0.0.1:53444.service - OpenSSH per-connection server daemon (10.0.0.1:53444). Aug 12 23:46:31.977940 sshd[1668]: Accepted publickey for core from 10.0.0.1 port 53444 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:46:31.979069 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:46:31.983651 systemd-logind[1520]: New session 2 of user core. Aug 12 23:46:31.997288 systemd[1]: Started session-2.scope - Session 2 of User core. 
Aug 12 23:46:32.046269 sshd[1670]: Connection closed by 10.0.0.1 port 53444 Aug 12 23:46:32.046656 sshd-session[1668]: pam_unix(sshd:session): session closed for user core Aug 12 23:46:32.062831 systemd[1]: sshd@1-10.0.0.67:22-10.0.0.1:53444.service: Deactivated successfully. Aug 12 23:46:32.065192 systemd[1]: session-2.scope: Deactivated successfully. Aug 12 23:46:32.065776 systemd-logind[1520]: Session 2 logged out. Waiting for processes to exit. Aug 12 23:46:32.069268 systemd[1]: Started sshd@2-10.0.0.67:22-10.0.0.1:53446.service - OpenSSH per-connection server daemon (10.0.0.1:53446). Aug 12 23:46:32.069708 systemd-logind[1520]: Removed session 2. Aug 12 23:46:32.110368 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 53446 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:46:32.111611 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:46:32.114879 systemd-logind[1520]: New session 3 of user core. Aug 12 23:46:32.127196 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 12 23:46:32.174132 sshd[1678]: Connection closed by 10.0.0.1 port 53446 Aug 12 23:46:32.174146 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Aug 12 23:46:32.195841 systemd[1]: sshd@2-10.0.0.67:22-10.0.0.1:53446.service: Deactivated successfully. Aug 12 23:46:32.198203 systemd[1]: session-3.scope: Deactivated successfully. Aug 12 23:46:32.199347 systemd-logind[1520]: Session 3 logged out. Waiting for processes to exit. Aug 12 23:46:32.200943 systemd[1]: Started sshd@3-10.0.0.67:22-10.0.0.1:53452.service - OpenSSH per-connection server daemon (10.0.0.1:53452). Aug 12 23:46:32.201688 systemd-logind[1520]: Removed session 3. 
Aug 12 23:46:32.252063 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 53452 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:46:32.253204 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:46:32.256655 systemd-logind[1520]: New session 4 of user core. Aug 12 23:46:32.264202 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 12 23:46:32.314265 sshd[1686]: Connection closed by 10.0.0.1 port 53452 Aug 12 23:46:32.314561 sshd-session[1684]: pam_unix(sshd:session): session closed for user core Aug 12 23:46:32.330045 systemd[1]: sshd@3-10.0.0.67:22-10.0.0.1:53452.service: Deactivated successfully. Aug 12 23:46:32.331623 systemd[1]: session-4.scope: Deactivated successfully. Aug 12 23:46:32.332368 systemd-logind[1520]: Session 4 logged out. Waiting for processes to exit. Aug 12 23:46:32.335300 systemd[1]: Started sshd@4-10.0.0.67:22-10.0.0.1:53464.service - OpenSSH per-connection server daemon (10.0.0.1:53464). Aug 12 23:46:32.335942 systemd-logind[1520]: Removed session 4. Aug 12 23:46:32.388060 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 53464 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:46:32.389192 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:46:32.392874 systemd-logind[1520]: New session 5 of user core. Aug 12 23:46:32.400224 systemd[1]: Started session-5.scope - Session 5 of User core. 
Aug 12 23:46:32.459594 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 12 23:46:32.459845 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:46:32.474693 sudo[1695]: pam_unix(sudo:session): session closed for user root Aug 12 23:46:32.476034 sshd[1694]: Connection closed by 10.0.0.1 port 53464 Aug 12 23:46:32.476388 sshd-session[1692]: pam_unix(sshd:session): session closed for user core Aug 12 23:46:32.491214 systemd[1]: sshd@4-10.0.0.67:22-10.0.0.1:53464.service: Deactivated successfully. Aug 12 23:46:32.492570 systemd[1]: session-5.scope: Deactivated successfully. Aug 12 23:46:32.494685 systemd-logind[1520]: Session 5 logged out. Waiting for processes to exit. Aug 12 23:46:32.498264 systemd[1]: Started sshd@5-10.0.0.67:22-10.0.0.1:50720.service - OpenSSH per-connection server daemon (10.0.0.1:50720). Aug 12 23:46:32.498708 systemd-logind[1520]: Removed session 5. Aug 12 23:46:32.547533 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 50720 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:46:32.548742 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:46:32.552314 systemd-logind[1520]: New session 6 of user core. Aug 12 23:46:32.560239 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 12 23:46:32.609843 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 12 23:46:32.610339 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:46:32.681718 sudo[1705]: pam_unix(sudo:session): session closed for user root Aug 12 23:46:32.686718 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 12 23:46:32.686993 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:46:32.695069 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 12 23:46:32.733829 augenrules[1727]: No rules Aug 12 23:46:32.735330 systemd[1]: audit-rules.service: Deactivated successfully. Aug 12 23:46:32.736197 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 12 23:46:32.737163 sudo[1704]: pam_unix(sudo:session): session closed for user root Aug 12 23:46:32.739834 sshd[1703]: Connection closed by 10.0.0.1 port 50720 Aug 12 23:46:32.740236 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Aug 12 23:46:32.750894 systemd[1]: sshd@5-10.0.0.67:22-10.0.0.1:50720.service: Deactivated successfully. Aug 12 23:46:32.752544 systemd[1]: session-6.scope: Deactivated successfully. Aug 12 23:46:32.754562 systemd-logind[1520]: Session 6 logged out. Waiting for processes to exit. Aug 12 23:46:32.756726 systemd[1]: Started sshd@6-10.0.0.67:22-10.0.0.1:50728.service - OpenSSH per-connection server daemon (10.0.0.1:50728). Aug 12 23:46:32.757665 systemd-logind[1520]: Removed session 6. Aug 12 23:46:32.803360 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 50728 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:46:32.804448 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:46:32.807999 systemd-logind[1520]: New session 7 of user core. 
Aug 12 23:46:32.815212 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 12 23:46:32.864813 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 12 23:46:32.865064 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 12 23:46:33.224503 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 12 23:46:33.238378 (dockerd)[1759]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 12 23:46:33.502502 dockerd[1759]: time="2025-08-12T23:46:33.502380917Z" level=info msg="Starting up" Aug 12 23:46:33.504031 dockerd[1759]: time="2025-08-12T23:46:33.503981997Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 12 23:46:33.539699 dockerd[1759]: time="2025-08-12T23:46:33.539656157Z" level=info msg="Loading containers: start." Aug 12 23:46:33.547108 kernel: Initializing XFRM netlink socket Aug 12 23:46:33.729515 systemd-networkd[1439]: docker0: Link UP Aug 12 23:46:33.732169 dockerd[1759]: time="2025-08-12T23:46:33.732124597Z" level=info msg="Loading containers: done." Aug 12 23:46:33.743125 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3784573699-merged.mount: Deactivated successfully. 
Aug 12 23:46:33.744485 dockerd[1759]: time="2025-08-12T23:46:33.744441757Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 12 23:46:33.744550 dockerd[1759]: time="2025-08-12T23:46:33.744525877Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 12 23:46:33.744636 dockerd[1759]: time="2025-08-12T23:46:33.744617637Z" level=info msg="Initializing buildkit" Aug 12 23:46:33.764245 dockerd[1759]: time="2025-08-12T23:46:33.764171717Z" level=info msg="Completed buildkit initialization" Aug 12 23:46:33.770309 dockerd[1759]: time="2025-08-12T23:46:33.770265717Z" level=info msg="Daemon has completed initialization" Aug 12 23:46:33.770398 dockerd[1759]: time="2025-08-12T23:46:33.770342437Z" level=info msg="API listen on /run/docker.sock" Aug 12 23:46:33.770470 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 12 23:46:34.624206 containerd[1536]: time="2025-08-12T23:46:34.624158237Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 12 23:46:35.404167 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2087455865.mount: Deactivated successfully. 
Aug 12 23:46:36.887519 containerd[1536]: time="2025-08-12T23:46:36.887446197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:36.887990 containerd[1536]: time="2025-08-12T23:46:36.887945677Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=26327783" Aug 12 23:46:36.888624 containerd[1536]: time="2025-08-12T23:46:36.888596797Z" level=info msg="ImageCreate event name:\"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:36.891553 containerd[1536]: time="2025-08-12T23:46:36.891498317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:36.892560 containerd[1536]: time="2025-08-12T23:46:36.892425717Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"26324581\" in 2.26822144s" Aug 12 23:46:36.892560 containerd[1536]: time="2025-08-12T23:46:36.892462397Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:edd0d4592f9097d398a2366cf9c2a86f488742a75ee0a73ebbee00f654b8bb3b\"" Aug 12 23:46:36.893087 containerd[1536]: time="2025-08-12T23:46:36.893055317Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 12 23:46:37.381495 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Aug 12 23:46:37.382953 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:46:37.506722 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:46:37.509879 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:46:37.548967 kubelet[2030]: E0812 23:46:37.548915 2030 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:46:37.551896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:46:37.552031 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:46:37.552611 systemd[1]: kubelet.service: Consumed 140ms CPU time, 108.3M memory peak. 
Aug 12 23:46:38.331602 containerd[1536]: time="2025-08-12T23:46:38.331545317Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:38.331991 containerd[1536]: time="2025-08-12T23:46:38.331959757Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=22529698" Aug 12 23:46:38.332796 containerd[1536]: time="2025-08-12T23:46:38.332759117Z" level=info msg="ImageCreate event name:\"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:38.335190 containerd[1536]: time="2025-08-12T23:46:38.335137157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:38.336254 containerd[1536]: time="2025-08-12T23:46:38.336227517Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"24065486\" in 1.44309628s" Aug 12 23:46:38.336302 containerd[1536]: time="2025-08-12T23:46:38.336259597Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:d53e0248330cfa27e6cbb5684905015074d9e59688c339b16207055c6d07a103\"" Aug 12 23:46:38.337130 containerd[1536]: time="2025-08-12T23:46:38.337070877Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 12 23:46:39.625842 containerd[1536]: time="2025-08-12T23:46:39.625793717Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:39.626738 containerd[1536]: time="2025-08-12T23:46:39.626521397Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=17484140" Aug 12 23:46:39.627478 containerd[1536]: time="2025-08-12T23:46:39.627437117Z" level=info msg="ImageCreate event name:\"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:39.629799 containerd[1536]: time="2025-08-12T23:46:39.629747637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:39.630612 containerd[1536]: time="2025-08-12T23:46:39.630560557Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"19019946\" in 1.29342516s" Aug 12 23:46:39.630612 containerd[1536]: time="2025-08-12T23:46:39.630593757Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:15a3296b1f1ad53bca0584492c05a9be73d836d12ccacb182daab897cbe9ac1e\"" Aug 12 23:46:39.631148 containerd[1536]: time="2025-08-12T23:46:39.631112357Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 12 23:46:40.689685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2873103479.mount: Deactivated successfully. 
Aug 12 23:46:40.925699 containerd[1536]: time="2025-08-12T23:46:40.925645197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:40.926516 containerd[1536]: time="2025-08-12T23:46:40.926478597Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=27378407" Aug 12 23:46:40.927308 containerd[1536]: time="2025-08-12T23:46:40.927256677Z" level=info msg="ImageCreate event name:\"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:40.929335 containerd[1536]: time="2025-08-12T23:46:40.929299197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:40.930513 containerd[1536]: time="2025-08-12T23:46:40.930384077Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"27377424\" in 1.29922984s" Aug 12 23:46:40.930513 containerd[1536]: time="2025-08-12T23:46:40.930417437Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:176e5fd5af03be683be55601db94020ad4cc275f4cca27999608d3cf65c9fb11\"" Aug 12 23:46:40.930844 containerd[1536]: time="2025-08-12T23:46:40.930821037Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 12 23:46:41.532371 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount532620159.mount: Deactivated successfully. 
Aug 12 23:46:42.487361 containerd[1536]: time="2025-08-12T23:46:42.487178117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:42.488114 containerd[1536]: time="2025-08-12T23:46:42.488004517Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Aug 12 23:46:42.488870 containerd[1536]: time="2025-08-12T23:46:42.488832597Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:42.491498 containerd[1536]: time="2025-08-12T23:46:42.491460717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:42.492634 containerd[1536]: time="2025-08-12T23:46:42.492578197Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.56172744s" Aug 12 23:46:42.492634 containerd[1536]: time="2025-08-12T23:46:42.492606117Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 12 23:46:42.493146 containerd[1536]: time="2025-08-12T23:46:42.493115477Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 12 23:46:42.994019 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1533841552.mount: Deactivated successfully. 
Aug 12 23:46:42.998505 containerd[1536]: time="2025-08-12T23:46:42.998450197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:46:42.999172 containerd[1536]: time="2025-08-12T23:46:42.999103917Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Aug 12 23:46:42.999825 containerd[1536]: time="2025-08-12T23:46:42.999790917Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:46:43.005102 containerd[1536]: time="2025-08-12T23:46:43.003237077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 12 23:46:43.005517 containerd[1536]: time="2025-08-12T23:46:43.005458437Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 512.30964ms" Aug 12 23:46:43.005517 containerd[1536]: time="2025-08-12T23:46:43.005503077Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 12 23:46:43.006118 containerd[1536]: time="2025-08-12T23:46:43.005996677Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 12 23:46:43.590186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount926060051.mount: Deactivated 
successfully. Aug 12 23:46:45.681419 containerd[1536]: time="2025-08-12T23:46:45.681367717Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:45.682339 containerd[1536]: time="2025-08-12T23:46:45.682139797Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Aug 12 23:46:45.683004 containerd[1536]: time="2025-08-12T23:46:45.682972037Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:45.685789 containerd[1536]: time="2025-08-12T23:46:45.685756157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:46:45.686929 containerd[1536]: time="2025-08-12T23:46:45.686887197Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.68084416s" Aug 12 23:46:45.686929 containerd[1536]: time="2025-08-12T23:46:45.686921637Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Aug 12 23:46:47.631694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 12 23:46:47.633400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:46:47.747556 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 12 23:46:47.750522 (kubelet)[2196]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 12 23:46:47.782516 kubelet[2196]: E0812 23:46:47.782469 2196 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 12 23:46:47.784981 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 12 23:46:47.785154 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 12 23:46:47.785438 systemd[1]: kubelet.service: Consumed 127ms CPU time, 106.6M memory peak. Aug 12 23:46:50.864720 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:46:50.864869 systemd[1]: kubelet.service: Consumed 127ms CPU time, 106.6M memory peak. Aug 12 23:46:50.867141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:46:50.970656 systemd[1]: Reload requested from client PID 2212 ('systemctl') (unit session-7.scope)... Aug 12 23:46:50.970674 systemd[1]: Reloading... Aug 12 23:46:51.051135 zram_generator::config[2261]: No configuration found. Aug 12 23:46:51.132042 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:46:51.217601 systemd[1]: Reloading finished in 246 ms. Aug 12 23:46:51.277485 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 12 23:46:51.277554 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 12 23:46:51.277779 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 12 23:46:51.277822 systemd[1]: kubelet.service: Consumed 86ms CPU time, 95.1M memory peak. Aug 12 23:46:51.279194 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:46:51.389668 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:46:51.393415 (kubelet)[2300]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:46:51.427650 kubelet[2300]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:46:51.427650 kubelet[2300]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 12 23:46:51.427650 kubelet[2300]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 12 23:46:51.427948 kubelet[2300]: I0812 23:46:51.427686 2300 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:46:52.032597 kubelet[2300]: I0812 23:46:52.032557 2300 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 12 23:46:52.032597 kubelet[2300]: I0812 23:46:52.032587 2300 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:46:52.032860 kubelet[2300]: I0812 23:46:52.032834 2300 server.go:954] "Client rotation is on, will bootstrap in background" Aug 12 23:46:52.079361 kubelet[2300]: E0812 23:46:52.079328 2300 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:46:52.080823 kubelet[2300]: I0812 23:46:52.080720 2300 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:46:52.085630 kubelet[2300]: I0812 23:46:52.085614 2300 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 12 23:46:52.088407 kubelet[2300]: I0812 23:46:52.088386 2300 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 12 23:46:52.088617 kubelet[2300]: I0812 23:46:52.088596 2300 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:46:52.088810 kubelet[2300]: I0812 23:46:52.088620 2300 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 12 23:46:52.088895 kubelet[2300]: I0812 23:46:52.088884 2300 topology_manager.go:138] "Creating topology manager with none policy" 
Aug 12 23:46:52.088895 kubelet[2300]: I0812 23:46:52.088893 2300 container_manager_linux.go:304] "Creating device plugin manager" Aug 12 23:46:52.089091 kubelet[2300]: I0812 23:46:52.089065 2300 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:46:52.093304 kubelet[2300]: I0812 23:46:52.093285 2300 kubelet.go:446] "Attempting to sync node with API server" Aug 12 23:46:52.093360 kubelet[2300]: I0812 23:46:52.093309 2300 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:46:52.094237 kubelet[2300]: I0812 23:46:52.093968 2300 kubelet.go:352] "Adding apiserver pod source" Aug 12 23:46:52.094237 kubelet[2300]: I0812 23:46:52.093989 2300 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:46:52.095975 kubelet[2300]: W0812 23:46:52.095925 2300 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Aug 12 23:46:52.096033 kubelet[2300]: E0812 23:46:52.095982 2300 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:46:52.099762 kubelet[2300]: W0812 23:46:52.099726 2300 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Aug 12 23:46:52.099855 kubelet[2300]: E0812 23:46:52.099770 2300 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:46:52.099855 kubelet[2300]: I0812 23:46:52.099834 2300 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 12 23:46:52.101057 kubelet[2300]: I0812 23:46:52.101030 2300 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:46:52.101202 kubelet[2300]: W0812 23:46:52.101188 2300 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 12 23:46:52.102095 kubelet[2300]: I0812 23:46:52.102035 2300 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 12 23:46:52.102095 kubelet[2300]: I0812 23:46:52.102066 2300 server.go:1287] "Started kubelet" Aug 12 23:46:52.103245 kubelet[2300]: I0812 23:46:52.103213 2300 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:46:52.104189 kubelet[2300]: I0812 23:46:52.104139 2300 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:46:52.104592 kubelet[2300]: I0812 23:46:52.104574 2300 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:46:52.104971 kubelet[2300]: I0812 23:46:52.104952 2300 server.go:479] "Adding debug handlers to kubelet server" Aug 12 23:46:52.105708 kubelet[2300]: I0812 23:46:52.105689 2300 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:46:52.105962 kubelet[2300]: I0812 23:46:52.105944 2300 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:46:52.107650 kubelet[2300]: I0812 23:46:52.107605 2300 volume_manager.go:297] "Starting Kubelet 
Volume Manager" Aug 12 23:46:52.108542 kubelet[2300]: E0812 23:46:52.107858 2300 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:46:52.108542 kubelet[2300]: I0812 23:46:52.108347 2300 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 12 23:46:52.108542 kubelet[2300]: I0812 23:46:52.108388 2300 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:46:52.109072 kubelet[2300]: W0812 23:46:52.109018 2300 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Aug 12 23:46:52.109120 kubelet[2300]: E0812 23:46:52.109069 2300 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:46:52.110018 kubelet[2300]: E0812 23:46:52.109974 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="200ms" Aug 12 23:46:52.110979 kubelet[2300]: I0812 23:46:52.110401 2300 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:46:52.113947 kubelet[2300]: I0812 23:46:52.112287 2300 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:46:52.113947 kubelet[2300]: I0812 23:46:52.112488 2300 factory.go:221] Registration of the systemd container factory successfully Aug 12 
23:46:52.113947 kubelet[2300]: E0812 23:46:52.112660 2300 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:46:52.114568 kubelet[2300]: E0812 23:46:52.114338 2300 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.67:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.67:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185b29c6731662fd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-12 23:46:52.102050557 +0000 UTC m=+0.705805921,LastTimestamp:2025-08-12 23:46:52.102050557 +0000 UTC m=+0.705805921,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 12 23:46:52.118825 kubelet[2300]: I0812 23:46:52.118803 2300 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 12 23:46:52.118917 kubelet[2300]: I0812 23:46:52.118906 2300 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 12 23:46:52.118973 kubelet[2300]: I0812 23:46:52.118965 2300 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:46:52.122874 kubelet[2300]: I0812 23:46:52.122816 2300 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:46:52.123756 kubelet[2300]: I0812 23:46:52.123717 2300 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 12 23:46:52.123756 kubelet[2300]: I0812 23:46:52.123737 2300 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 12 23:46:52.123756 kubelet[2300]: I0812 23:46:52.123758 2300 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 12 23:46:52.123756 kubelet[2300]: I0812 23:46:52.123764 2300 kubelet.go:2382] "Starting kubelet main sync loop" Aug 12 23:46:52.123883 kubelet[2300]: E0812 23:46:52.123800 2300 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:46:52.127274 kubelet[2300]: W0812 23:46:52.127236 2300 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Aug 12 23:46:52.127379 kubelet[2300]: E0812 23:46:52.127273 2300 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:46:52.208275 kubelet[2300]: E0812 23:46:52.208227 2300 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:46:52.221203 kubelet[2300]: I0812 23:46:52.221161 2300 policy_none.go:49] "None policy: Start" Aug 12 23:46:52.221203 kubelet[2300]: I0812 23:46:52.221196 2300 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 12 23:46:52.221203 kubelet[2300]: I0812 23:46:52.221209 2300 state_mem.go:35] "Initializing new in-memory state store" Aug 12 23:46:52.224104 kubelet[2300]: E0812 23:46:52.224041 2300 kubelet.go:2406] "Skipping 
pod synchronization" err="container runtime status check may not have completed yet" Aug 12 23:46:52.227575 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 12 23:46:52.240874 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 12 23:46:52.244101 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 12 23:46:52.263754 kubelet[2300]: I0812 23:46:52.263728 2300 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:46:52.263929 kubelet[2300]: I0812 23:46:52.263900 2300 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:46:52.263962 kubelet[2300]: I0812 23:46:52.263917 2300 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:46:52.264532 kubelet[2300]: I0812 23:46:52.264182 2300 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:46:52.265059 kubelet[2300]: E0812 23:46:52.265027 2300 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 12 23:46:52.265178 kubelet[2300]: E0812 23:46:52.265068 2300 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 12 23:46:52.310705 kubelet[2300]: E0812 23:46:52.310638 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="400ms" Aug 12 23:46:52.365789 kubelet[2300]: I0812 23:46:52.365724 2300 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:46:52.366135 kubelet[2300]: E0812 23:46:52.366108 2300 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Aug 12 23:46:52.431405 systemd[1]: Created slice kubepods-burstable-pod79749cacde7ea2593e2106d1ae11f35e.slice - libcontainer container kubepods-burstable-pod79749cacde7ea2593e2106d1ae11f35e.slice. Aug 12 23:46:52.454818 kubelet[2300]: E0812 23:46:52.454781 2300 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:46:52.458129 systemd[1]: Created slice kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice - libcontainer container kubepods-burstable-pod393e2c0a78c0056780c2194ff80c6df1.slice. Aug 12 23:46:52.460033 kubelet[2300]: E0812 23:46:52.460013 2300 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:46:52.461965 systemd[1]: Created slice kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice - libcontainer container kubepods-burstable-pod750d39fc02542d706e018e4727e23919.slice. 
Aug 12 23:46:52.463467 kubelet[2300]: E0812 23:46:52.463430 2300 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:46:52.510798 kubelet[2300]: I0812 23:46:52.510772 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Aug 12 23:46:52.510843 kubelet[2300]: I0812 23:46:52.510804 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79749cacde7ea2593e2106d1ae11f35e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"79749cacde7ea2593e2106d1ae11f35e\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:46:52.510843 kubelet[2300]: I0812 23:46:52.510823 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:46:52.510843 kubelet[2300]: I0812 23:46:52.510839 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:46:52.510843 kubelet[2300]: I0812 23:46:52.510854 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:46:52.510843 kubelet[2300]: I0812 23:46:52.510878 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79749cacde7ea2593e2106d1ae11f35e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"79749cacde7ea2593e2106d1ae11f35e\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:46:52.511133 kubelet[2300]: I0812 23:46:52.510894 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79749cacde7ea2593e2106d1ae11f35e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"79749cacde7ea2593e2106d1ae11f35e\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:46:52.511133 kubelet[2300]: I0812 23:46:52.510908 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:46:52.511133 kubelet[2300]: I0812 23:46:52.510930 2300 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:46:52.567937 kubelet[2300]: I0812 23:46:52.567814 2300 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:46:52.568193 
kubelet[2300]: E0812 23:46:52.568146 2300 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Aug 12 23:46:52.712974 kubelet[2300]: E0812 23:46:52.712929 2300 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.67:6443: connect: connection refused" interval="800ms" Aug 12 23:46:52.756770 containerd[1536]: time="2025-08-12T23:46:52.756729917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:79749cacde7ea2593e2106d1ae11f35e,Namespace:kube-system,Attempt:0,}" Aug 12 23:46:52.761403 containerd[1536]: time="2025-08-12T23:46:52.761345717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,}" Aug 12 23:46:52.765099 containerd[1536]: time="2025-08-12T23:46:52.765044237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,}" Aug 12 23:46:52.779794 containerd[1536]: time="2025-08-12T23:46:52.779756557Z" level=info msg="connecting to shim d426bf5bb419e6f609f6cdff02ec030d7749e73da35a30778e7f5666a41bd221" address="unix:///run/containerd/s/58c3b955f2fc719ecf9b76607cc2108cb04d97efa43d61fa2f7dee7d31143372" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:46:52.791794 containerd[1536]: time="2025-08-12T23:46:52.791691557Z" level=info msg="connecting to shim 60f6a273f352fefec1fae8d07a08e3b08075c5e4fc196dbc5f4c5f827f50b354" address="unix:///run/containerd/s/8bea7a810b195bb8675130161e7fb008f8492ad0d1d9e9370dbf9492ef3f1d9d" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:46:52.795276 containerd[1536]: time="2025-08-12T23:46:52.795189437Z" 
level=info msg="connecting to shim db42ea26e8368bb7393d15217c36e047cda704754664fe371aba20c20917523f" address="unix:///run/containerd/s/26b8fb8eb34f4bf79cdf079c25c280683e2b72d3c5043d03c7c5d210d0589c99" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:46:52.811264 systemd[1]: Started cri-containerd-d426bf5bb419e6f609f6cdff02ec030d7749e73da35a30778e7f5666a41bd221.scope - libcontainer container d426bf5bb419e6f609f6cdff02ec030d7749e73da35a30778e7f5666a41bd221. Aug 12 23:46:52.815040 systemd[1]: Started cri-containerd-60f6a273f352fefec1fae8d07a08e3b08075c5e4fc196dbc5f4c5f827f50b354.scope - libcontainer container 60f6a273f352fefec1fae8d07a08e3b08075c5e4fc196dbc5f4c5f827f50b354. Aug 12 23:46:52.828240 systemd[1]: Started cri-containerd-db42ea26e8368bb7393d15217c36e047cda704754664fe371aba20c20917523f.scope - libcontainer container db42ea26e8368bb7393d15217c36e047cda704754664fe371aba20c20917523f. Aug 12 23:46:52.862068 containerd[1536]: time="2025-08-12T23:46:52.862019797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:393e2c0a78c0056780c2194ff80c6df1,Namespace:kube-system,Attempt:0,} returns sandbox id \"60f6a273f352fefec1fae8d07a08e3b08075c5e4fc196dbc5f4c5f827f50b354\"" Aug 12 23:46:52.864899 containerd[1536]: time="2025-08-12T23:46:52.864847597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:79749cacde7ea2593e2106d1ae11f35e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d426bf5bb419e6f609f6cdff02ec030d7749e73da35a30778e7f5666a41bd221\"" Aug 12 23:46:52.866560 containerd[1536]: time="2025-08-12T23:46:52.866506317Z" level=info msg="CreateContainer within sandbox \"60f6a273f352fefec1fae8d07a08e3b08075c5e4fc196dbc5f4c5f827f50b354\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 12 23:46:52.868043 containerd[1536]: time="2025-08-12T23:46:52.867984837Z" level=info msg="CreateContainer within sandbox 
\"d426bf5bb419e6f609f6cdff02ec030d7749e73da35a30778e7f5666a41bd221\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 12 23:46:52.873568 containerd[1536]: time="2025-08-12T23:46:52.873537117Z" level=info msg="Container 0c541f2bb86765d199a9dd3974246786dd7610d24c396100c1736175b38daedd: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:46:52.874462 containerd[1536]: time="2025-08-12T23:46:52.874398477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:750d39fc02542d706e018e4727e23919,Namespace:kube-system,Attempt:0,} returns sandbox id \"db42ea26e8368bb7393d15217c36e047cda704754664fe371aba20c20917523f\"" Aug 12 23:46:52.876756 containerd[1536]: time="2025-08-12T23:46:52.876663077Z" level=info msg="CreateContainer within sandbox \"db42ea26e8368bb7393d15217c36e047cda704754664fe371aba20c20917523f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 12 23:46:52.881175 containerd[1536]: time="2025-08-12T23:46:52.881141717Z" level=info msg="CreateContainer within sandbox \"60f6a273f352fefec1fae8d07a08e3b08075c5e4fc196dbc5f4c5f827f50b354\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0c541f2bb86765d199a9dd3974246786dd7610d24c396100c1736175b38daedd\"" Aug 12 23:46:52.881821 containerd[1536]: time="2025-08-12T23:46:52.881794637Z" level=info msg="StartContainer for \"0c541f2bb86765d199a9dd3974246786dd7610d24c396100c1736175b38daedd\"" Aug 12 23:46:52.882937 containerd[1536]: time="2025-08-12T23:46:52.882857677Z" level=info msg="connecting to shim 0c541f2bb86765d199a9dd3974246786dd7610d24c396100c1736175b38daedd" address="unix:///run/containerd/s/8bea7a810b195bb8675130161e7fb008f8492ad0d1d9e9370dbf9492ef3f1d9d" protocol=ttrpc version=3 Aug 12 23:46:52.884025 containerd[1536]: time="2025-08-12T23:46:52.883941157Z" level=info msg="Container d550f73b0755c1382f800e269565f965136d07d44d588233b04becec898bdb5e: CDI devices from CRI Config.CDIDevices: []" Aug 12 
23:46:52.888066 containerd[1536]: time="2025-08-12T23:46:52.888038237Z" level=info msg="Container 1027145928af721dc1379be636e2909e22dc1e4cdde56a91a440561a136ae0f3: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:46:52.892098 containerd[1536]: time="2025-08-12T23:46:52.892046917Z" level=info msg="CreateContainer within sandbox \"d426bf5bb419e6f609f6cdff02ec030d7749e73da35a30778e7f5666a41bd221\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d550f73b0755c1382f800e269565f965136d07d44d588233b04becec898bdb5e\"" Aug 12 23:46:52.892701 containerd[1536]: time="2025-08-12T23:46:52.892675077Z" level=info msg="StartContainer for \"d550f73b0755c1382f800e269565f965136d07d44d588233b04becec898bdb5e\"" Aug 12 23:46:52.893919 containerd[1536]: time="2025-08-12T23:46:52.893889157Z" level=info msg="connecting to shim d550f73b0755c1382f800e269565f965136d07d44d588233b04becec898bdb5e" address="unix:///run/containerd/s/58c3b955f2fc719ecf9b76607cc2108cb04d97efa43d61fa2f7dee7d31143372" protocol=ttrpc version=3 Aug 12 23:46:52.900100 containerd[1536]: time="2025-08-12T23:46:52.899002477Z" level=info msg="CreateContainer within sandbox \"db42ea26e8368bb7393d15217c36e047cda704754664fe371aba20c20917523f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1027145928af721dc1379be636e2909e22dc1e4cdde56a91a440561a136ae0f3\"" Aug 12 23:46:52.900100 containerd[1536]: time="2025-08-12T23:46:52.900026997Z" level=info msg="StartContainer for \"1027145928af721dc1379be636e2909e22dc1e4cdde56a91a440561a136ae0f3\"" Aug 12 23:46:52.901576 containerd[1536]: time="2025-08-12T23:46:52.901544837Z" level=info msg="connecting to shim 1027145928af721dc1379be636e2909e22dc1e4cdde56a91a440561a136ae0f3" address="unix:///run/containerd/s/26b8fb8eb34f4bf79cdf079c25c280683e2b72d3c5043d03c7c5d210d0589c99" protocol=ttrpc version=3 Aug 12 23:46:52.905274 systemd[1]: Started cri-containerd-0c541f2bb86765d199a9dd3974246786dd7610d24c396100c1736175b38daedd.scope - 
libcontainer container 0c541f2bb86765d199a9dd3974246786dd7610d24c396100c1736175b38daedd. Aug 12 23:46:52.908983 systemd[1]: Started cri-containerd-d550f73b0755c1382f800e269565f965136d07d44d588233b04becec898bdb5e.scope - libcontainer container d550f73b0755c1382f800e269565f965136d07d44d588233b04becec898bdb5e. Aug 12 23:46:52.926232 systemd[1]: Started cri-containerd-1027145928af721dc1379be636e2909e22dc1e4cdde56a91a440561a136ae0f3.scope - libcontainer container 1027145928af721dc1379be636e2909e22dc1e4cdde56a91a440561a136ae0f3. Aug 12 23:46:52.966012 containerd[1536]: time="2025-08-12T23:46:52.965880677Z" level=info msg="StartContainer for \"0c541f2bb86765d199a9dd3974246786dd7610d24c396100c1736175b38daedd\" returns successfully" Aug 12 23:46:52.971701 kubelet[2300]: I0812 23:46:52.971415 2300 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:46:52.971827 kubelet[2300]: E0812 23:46:52.971791 2300 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.67:6443/api/v1/nodes\": dial tcp 10.0.0.67:6443: connect: connection refused" node="localhost" Aug 12 23:46:52.975371 containerd[1536]: time="2025-08-12T23:46:52.975297277Z" level=info msg="StartContainer for \"d550f73b0755c1382f800e269565f965136d07d44d588233b04becec898bdb5e\" returns successfully" Aug 12 23:46:52.989793 containerd[1536]: time="2025-08-12T23:46:52.989755957Z" level=info msg="StartContainer for \"1027145928af721dc1379be636e2909e22dc1e4cdde56a91a440561a136ae0f3\" returns successfully" Aug 12 23:46:53.031895 kubelet[2300]: W0812 23:46:53.027264 2300 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Aug 12 23:46:53.031895 kubelet[2300]: E0812 23:46:53.027323 2300 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: 
Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:46:53.119719 kubelet[2300]: W0812 23:46:53.119527 2300 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.67:6443: connect: connection refused Aug 12 23:46:53.119719 kubelet[2300]: E0812 23:46:53.119596 2300 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.67:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.67:6443: connect: connection refused" logger="UnhandledError" Aug 12 23:46:53.138107 kubelet[2300]: E0812 23:46:53.136951 2300 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:46:53.138107 kubelet[2300]: E0812 23:46:53.136989 2300 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:46:53.139653 kubelet[2300]: E0812 23:46:53.139631 2300 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:46:53.773365 kubelet[2300]: I0812 23:46:53.773331 2300 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:46:54.141978 kubelet[2300]: E0812 23:46:54.141953 2300 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:46:54.142194 kubelet[2300]: E0812 
23:46:54.142174 2300 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 12 23:46:54.884032 kubelet[2300]: E0812 23:46:54.883982 2300 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 12 23:46:54.952304 kubelet[2300]: I0812 23:46:54.952255 2300 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 12 23:46:54.952304 kubelet[2300]: E0812 23:46:54.952303 2300 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Aug 12 23:46:54.963217 kubelet[2300]: E0812 23:46:54.963178 2300 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:46:55.097501 kubelet[2300]: I0812 23:46:55.097455 2300 apiserver.go:52] "Watching apiserver" Aug 12 23:46:55.108691 kubelet[2300]: I0812 23:46:55.108651 2300 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 12 23:46:55.108802 kubelet[2300]: I0812 23:46:55.108694 2300 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 12 23:46:55.114014 kubelet[2300]: E0812 23:46:55.113962 2300 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 12 23:46:55.114014 kubelet[2300]: I0812 23:46:55.113989 2300 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 12 23:46:55.115669 kubelet[2300]: E0812 23:46:55.115638 2300 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-localhost" Aug 12 23:46:55.115669 kubelet[2300]: I0812 23:46:55.115665 2300 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 12 23:46:55.117267 kubelet[2300]: E0812 23:46:55.117237 2300 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 12 23:46:55.835944 kubelet[2300]: I0812 23:46:55.835914 2300 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 12 23:46:56.848602 systemd[1]: Reload requested from client PID 2575 ('systemctl') (unit session-7.scope)... Aug 12 23:46:56.848617 systemd[1]: Reloading... Aug 12 23:46:56.904160 zram_generator::config[2620]: No configuration found. Aug 12 23:46:56.975869 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 12 23:46:57.073185 systemd[1]: Reloading finished in 224 ms. Aug 12 23:46:57.099393 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:46:57.111527 systemd[1]: kubelet.service: Deactivated successfully. Aug 12 23:46:57.111735 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 12 23:46:57.111780 systemd[1]: kubelet.service: Consumed 1.108s CPU time, 128.7M memory peak. Aug 12 23:46:57.114151 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 12 23:46:57.250184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 12 23:46:57.260411 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 12 23:46:57.300703 kubelet[2659]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:46:57.300703 kubelet[2659]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 12 23:46:57.300703 kubelet[2659]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 12 23:46:57.301039 kubelet[2659]: I0812 23:46:57.300826 2659 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 12 23:46:57.306869 kubelet[2659]: I0812 23:46:57.306829 2659 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 12 23:46:57.306869 kubelet[2659]: I0812 23:46:57.306860 2659 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 12 23:46:57.307154 kubelet[2659]: I0812 23:46:57.307131 2659 server.go:954] "Client rotation is on, will bootstrap in background" Aug 12 23:46:57.308342 kubelet[2659]: I0812 23:46:57.308316 2659 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 12 23:46:57.311275 kubelet[2659]: I0812 23:46:57.311244 2659 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 12 23:46:57.314484 kubelet[2659]: I0812 23:46:57.314463 2659 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 12 23:46:57.317693 kubelet[2659]: I0812 23:46:57.317669 2659 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 12 23:46:57.317889 kubelet[2659]: I0812 23:46:57.317867 2659 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 12 23:46:57.318039 kubelet[2659]: I0812 23:46:57.317892 2659 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 12 23:46:57.318131 kubelet[2659]: I0812 23:46:57.318048 2659 topology_manager.go:138] "Creating topology manager with none policy" Aug 12 23:46:57.318131 kubelet[2659]: I0812 23:46:57.318057 2659 container_manager_linux.go:304] "Creating device plugin manager" Aug 12 23:46:57.318131 kubelet[2659]: I0812 23:46:57.318124 2659 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:46:57.318267 kubelet[2659]: I0812 23:46:57.318257 2659 kubelet.go:446] "Attempting to sync node with API server" Aug 12 23:46:57.318298 kubelet[2659]: I0812 23:46:57.318271 2659 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 12 23:46:57.318298 kubelet[2659]: I0812 23:46:57.318288 2659 kubelet.go:352] "Adding apiserver pod source" Aug 12 23:46:57.318905 kubelet[2659]: I0812 23:46:57.318300 2659 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 12 23:46:57.319154 kubelet[2659]: I0812 23:46:57.319129 2659 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 12 23:46:57.319680 kubelet[2659]: I0812 23:46:57.319658 2659 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 12 23:46:57.321350 kubelet[2659]: I0812 23:46:57.320217 2659 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 12 23:46:57.321350 kubelet[2659]: I0812 23:46:57.320266 2659 server.go:1287] "Started kubelet" Aug 12 23:46:57.321350 kubelet[2659]: I0812 23:46:57.320546 2659 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 12 23:46:57.321350 
kubelet[2659]: I0812 23:46:57.320802 2659 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 12 23:46:57.321350 kubelet[2659]: I0812 23:46:57.320856 2659 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 12 23:46:57.322056 kubelet[2659]: I0812 23:46:57.321546 2659 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 12 23:46:57.322056 kubelet[2659]: I0812 23:46:57.321720 2659 server.go:479] "Adding debug handlers to kubelet server" Aug 12 23:46:57.324401 kubelet[2659]: I0812 23:46:57.322343 2659 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 12 23:46:57.324401 kubelet[2659]: E0812 23:46:57.322836 2659 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 12 23:46:57.324401 kubelet[2659]: I0812 23:46:57.322872 2659 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 12 23:46:57.324401 kubelet[2659]: I0812 23:46:57.323000 2659 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 12 23:46:57.324401 kubelet[2659]: I0812 23:46:57.323155 2659 reconciler.go:26] "Reconciler: start to sync state" Aug 12 23:46:57.324401 kubelet[2659]: I0812 23:46:57.324149 2659 factory.go:221] Registration of the systemd container factory successfully Aug 12 23:46:57.324401 kubelet[2659]: I0812 23:46:57.324261 2659 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 12 23:46:57.325331 kubelet[2659]: I0812 23:46:57.325310 2659 factory.go:221] Registration of the containerd container factory successfully Aug 12 23:46:57.332494 kubelet[2659]: E0812 23:46:57.331689 2659 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 12 23:46:57.336442 kubelet[2659]: I0812 23:46:57.336400 2659 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 12 23:46:57.337216 kubelet[2659]: I0812 23:46:57.337197 2659 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 12 23:46:57.337255 kubelet[2659]: I0812 23:46:57.337219 2659 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 12 23:46:57.337255 kubelet[2659]: I0812 23:46:57.337236 2659 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 12 23:46:57.337255 kubelet[2659]: I0812 23:46:57.337243 2659 kubelet.go:2382] "Starting kubelet main sync loop" Aug 12 23:46:57.337313 kubelet[2659]: E0812 23:46:57.337280 2659 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 12 23:46:57.374129 kubelet[2659]: I0812 23:46:57.374029 2659 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 12 23:46:57.374129 kubelet[2659]: I0812 23:46:57.374056 2659 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 12 23:46:57.374129 kubelet[2659]: I0812 23:46:57.374076 2659 state_mem.go:36] "Initialized new in-memory state store" Aug 12 23:46:57.374263 kubelet[2659]: I0812 23:46:57.374238 2659 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 12 23:46:57.374263 kubelet[2659]: I0812 23:46:57.374248 2659 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 12 23:46:57.374263 kubelet[2659]: I0812 23:46:57.374265 2659 policy_none.go:49] "None policy: Start" Aug 12 23:46:57.374318 kubelet[2659]: I0812 23:46:57.374274 2659 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 12 23:46:57.374318 kubelet[2659]: I0812 23:46:57.374283 2659 state_mem.go:35] "Initializing new in-memory state 
store" Aug 12 23:46:57.374397 kubelet[2659]: I0812 23:46:57.374380 2659 state_mem.go:75] "Updated machine memory state" Aug 12 23:46:57.378217 kubelet[2659]: I0812 23:46:57.378190 2659 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 12 23:46:57.378371 kubelet[2659]: I0812 23:46:57.378348 2659 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 12 23:46:57.378406 kubelet[2659]: I0812 23:46:57.378366 2659 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 12 23:46:57.379060 kubelet[2659]: I0812 23:46:57.379025 2659 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 12 23:46:57.379840 kubelet[2659]: E0812 23:46:57.379634 2659 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 12 23:46:57.438673 kubelet[2659]: I0812 23:46:57.438619 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 12 23:46:57.439224 kubelet[2659]: I0812 23:46:57.439143 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 12 23:46:57.439902 kubelet[2659]: I0812 23:46:57.439452 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 12 23:46:57.445123 kubelet[2659]: E0812 23:46:57.444643 2659 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Aug 12 23:46:57.479918 kubelet[2659]: I0812 23:46:57.479891 2659 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 12 23:46:57.485562 kubelet[2659]: I0812 23:46:57.485412 2659 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Aug 12 23:46:57.485562 
kubelet[2659]: I0812 23:46:57.485498 2659 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 12 23:46:57.524774 kubelet[2659]: I0812 23:46:57.524657 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79749cacde7ea2593e2106d1ae11f35e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"79749cacde7ea2593e2106d1ae11f35e\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:46:57.524774 kubelet[2659]: I0812 23:46:57.524703 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79749cacde7ea2593e2106d1ae11f35e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"79749cacde7ea2593e2106d1ae11f35e\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:46:57.524774 kubelet[2659]: I0812 23:46:57.524724 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:46:57.524774 kubelet[2659]: I0812 23:46:57.524742 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:46:57.524774 kubelet[2659]: I0812 23:46:57.524757 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79749cacde7ea2593e2106d1ae11f35e-ca-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"79749cacde7ea2593e2106d1ae11f35e\") " pod="kube-system/kube-apiserver-localhost" Aug 12 23:46:57.525049 kubelet[2659]: I0812 23:46:57.524772 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:46:57.525049 kubelet[2659]: I0812 23:46:57.524787 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:46:57.525049 kubelet[2659]: I0812 23:46:57.524803 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/393e2c0a78c0056780c2194ff80c6df1-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"393e2c0a78c0056780c2194ff80c6df1\") " pod="kube-system/kube-controller-manager-localhost" Aug 12 23:46:57.525049 kubelet[2659]: I0812 23:46:57.524821 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/750d39fc02542d706e018e4727e23919-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"750d39fc02542d706e018e4727e23919\") " pod="kube-system/kube-scheduler-localhost" Aug 12 23:46:58.319535 kubelet[2659]: I0812 23:46:58.319492 2659 apiserver.go:52] "Watching apiserver" Aug 12 23:46:58.323196 kubelet[2659]: I0812 23:46:58.323169 2659 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 12 
23:46:58.364110 kubelet[2659]: I0812 23:46:58.363992 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 12 23:46:58.364789 kubelet[2659]: I0812 23:46:58.364119 2659 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 12 23:46:58.370500 kubelet[2659]: E0812 23:46:58.370376 2659 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Aug 12 23:46:58.370621 kubelet[2659]: E0812 23:46:58.370592 2659 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Aug 12 23:46:58.385368 kubelet[2659]: I0812 23:46:58.385263 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3852493350000001 podStartE2EDuration="1.385249335s" podCreationTimestamp="2025-08-12 23:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:46:58.381201245 +0000 UTC m=+1.116730769" watchObservedRunningTime="2025-08-12 23:46:58.385249335 +0000 UTC m=+1.120778859" Aug 12 23:46:58.398815 kubelet[2659]: I0812 23:46:58.398769 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.398753528 podStartE2EDuration="1.398753528s" podCreationTimestamp="2025-08-12 23:46:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:46:58.398714808 +0000 UTC m=+1.134244332" watchObservedRunningTime="2025-08-12 23:46:58.398753528 +0000 UTC m=+1.134283012" Aug 12 23:46:58.399607 kubelet[2659]: I0812 23:46:58.398852 2659 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.398846449 podStartE2EDuration="3.398846449s" podCreationTimestamp="2025-08-12 23:46:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:46:58.391679631 +0000 UTC m=+1.127209115" watchObservedRunningTime="2025-08-12 23:46:58.398846449 +0000 UTC m=+1.134375933" Aug 12 23:47:03.422956 kubelet[2659]: I0812 23:47:03.422926 2659 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 12 23:47:03.423906 containerd[1536]: time="2025-08-12T23:47:03.423861969Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 12 23:47:03.424204 kubelet[2659]: I0812 23:47:03.424171 2659 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 12 23:47:04.357911 systemd[1]: Created slice kubepods-besteffort-pod86aafca9_31fd_486b_a4a1_ba4346870fc6.slice - libcontainer container kubepods-besteffort-pod86aafca9_31fd_486b_a4a1_ba4346870fc6.slice. 
Aug 12 23:47:04.374219 kubelet[2659]: I0812 23:47:04.374152 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86aafca9-31fd-486b-a4a1-ba4346870fc6-lib-modules\") pod \"kube-proxy-vjkx9\" (UID: \"86aafca9-31fd-486b-a4a1-ba4346870fc6\") " pod="kube-system/kube-proxy-vjkx9" Aug 12 23:47:04.374461 kubelet[2659]: I0812 23:47:04.374236 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/86aafca9-31fd-486b-a4a1-ba4346870fc6-kube-proxy\") pod \"kube-proxy-vjkx9\" (UID: \"86aafca9-31fd-486b-a4a1-ba4346870fc6\") " pod="kube-system/kube-proxy-vjkx9" Aug 12 23:47:04.374461 kubelet[2659]: I0812 23:47:04.374285 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86aafca9-31fd-486b-a4a1-ba4346870fc6-xtables-lock\") pod \"kube-proxy-vjkx9\" (UID: \"86aafca9-31fd-486b-a4a1-ba4346870fc6\") " pod="kube-system/kube-proxy-vjkx9" Aug 12 23:47:04.374461 kubelet[2659]: I0812 23:47:04.374355 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b954l\" (UniqueName: \"kubernetes.io/projected/86aafca9-31fd-486b-a4a1-ba4346870fc6-kube-api-access-b954l\") pod \"kube-proxy-vjkx9\" (UID: \"86aafca9-31fd-486b-a4a1-ba4346870fc6\") " pod="kube-system/kube-proxy-vjkx9" Aug 12 23:47:04.522339 systemd[1]: Created slice kubepods-besteffort-pod2b6ce404_9b8a_43f2_bf5d_948439af9ba5.slice - libcontainer container kubepods-besteffort-pod2b6ce404_9b8a_43f2_bf5d_948439af9ba5.slice. 
Aug 12 23:47:04.575975 kubelet[2659]: I0812 23:47:04.575894 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2b6ce404-9b8a-43f2-bf5d-948439af9ba5-var-lib-calico\") pod \"tigera-operator-747864d56d-vzrjq\" (UID: \"2b6ce404-9b8a-43f2-bf5d-948439af9ba5\") " pod="tigera-operator/tigera-operator-747864d56d-vzrjq" Aug 12 23:47:04.575975 kubelet[2659]: I0812 23:47:04.575938 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvk48\" (UniqueName: \"kubernetes.io/projected/2b6ce404-9b8a-43f2-bf5d-948439af9ba5-kube-api-access-dvk48\") pod \"tigera-operator-747864d56d-vzrjq\" (UID: \"2b6ce404-9b8a-43f2-bf5d-948439af9ba5\") " pod="tigera-operator/tigera-operator-747864d56d-vzrjq" Aug 12 23:47:04.676721 containerd[1536]: time="2025-08-12T23:47:04.676674272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vjkx9,Uid:86aafca9-31fd-486b-a4a1-ba4346870fc6,Namespace:kube-system,Attempt:0,}" Aug 12 23:47:04.697522 containerd[1536]: time="2025-08-12T23:47:04.697477187Z" level=info msg="connecting to shim 4e24c2aeb74f0da09dbdd7eaf70d749ad8ee9b2d3dbfd5ce8f0aaeab6fbde050" address="unix:///run/containerd/s/90358309c7a9c8ca089e24043b92f5d2adfde6a189f44e60399d9ecbc211c6c7" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:47:04.719253 systemd[1]: Started cri-containerd-4e24c2aeb74f0da09dbdd7eaf70d749ad8ee9b2d3dbfd5ce8f0aaeab6fbde050.scope - libcontainer container 4e24c2aeb74f0da09dbdd7eaf70d749ad8ee9b2d3dbfd5ce8f0aaeab6fbde050. 
Aug 12 23:47:04.739585 containerd[1536]: time="2025-08-12T23:47:04.739546777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vjkx9,Uid:86aafca9-31fd-486b-a4a1-ba4346870fc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e24c2aeb74f0da09dbdd7eaf70d749ad8ee9b2d3dbfd5ce8f0aaeab6fbde050\"" Aug 12 23:47:04.745553 containerd[1536]: time="2025-08-12T23:47:04.745512826Z" level=info msg="CreateContainer within sandbox \"4e24c2aeb74f0da09dbdd7eaf70d749ad8ee9b2d3dbfd5ce8f0aaeab6fbde050\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 12 23:47:04.755269 containerd[1536]: time="2025-08-12T23:47:04.754228521Z" level=info msg="Container 92b433b75196bdb72889b8176cf7aac36b0a0902459ae126fe6b71ea3d9f6aab: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:04.762883 containerd[1536]: time="2025-08-12T23:47:04.762843015Z" level=info msg="CreateContainer within sandbox \"4e24c2aeb74f0da09dbdd7eaf70d749ad8ee9b2d3dbfd5ce8f0aaeab6fbde050\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"92b433b75196bdb72889b8176cf7aac36b0a0902459ae126fe6b71ea3d9f6aab\"" Aug 12 23:47:04.763370 containerd[1536]: time="2025-08-12T23:47:04.763344336Z" level=info msg="StartContainer for \"92b433b75196bdb72889b8176cf7aac36b0a0902459ae126fe6b71ea3d9f6aab\"" Aug 12 23:47:04.764676 containerd[1536]: time="2025-08-12T23:47:04.764628538Z" level=info msg="connecting to shim 92b433b75196bdb72889b8176cf7aac36b0a0902459ae126fe6b71ea3d9f6aab" address="unix:///run/containerd/s/90358309c7a9c8ca089e24043b92f5d2adfde6a189f44e60399d9ecbc211c6c7" protocol=ttrpc version=3 Aug 12 23:47:04.793260 systemd[1]: Started cri-containerd-92b433b75196bdb72889b8176cf7aac36b0a0902459ae126fe6b71ea3d9f6aab.scope - libcontainer container 92b433b75196bdb72889b8176cf7aac36b0a0902459ae126fe6b71ea3d9f6aab. 
Aug 12 23:47:04.830615 containerd[1536]: time="2025-08-12T23:47:04.830502087Z" level=info msg="StartContainer for \"92b433b75196bdb72889b8176cf7aac36b0a0902459ae126fe6b71ea3d9f6aab\" returns successfully" Aug 12 23:47:04.830781 containerd[1536]: time="2025-08-12T23:47:04.830742088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-vzrjq,Uid:2b6ce404-9b8a-43f2-bf5d-948439af9ba5,Namespace:tigera-operator,Attempt:0,}" Aug 12 23:47:04.856033 containerd[1536]: time="2025-08-12T23:47:04.855729049Z" level=info msg="connecting to shim 2c3faf7dae00ab65bdbde65bd5d23c3c2a663635c9c2a9c65c787950af76dcda" address="unix:///run/containerd/s/882281c4443f7c3ff92c666e1d14459192935386c9f1e80be413f4e3a81673c9" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:47:04.897243 systemd[1]: Started cri-containerd-2c3faf7dae00ab65bdbde65bd5d23c3c2a663635c9c2a9c65c787950af76dcda.scope - libcontainer container 2c3faf7dae00ab65bdbde65bd5d23c3c2a663635c9c2a9c65c787950af76dcda. Aug 12 23:47:04.938861 containerd[1536]: time="2025-08-12T23:47:04.938617387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-vzrjq,Uid:2b6ce404-9b8a-43f2-bf5d-948439af9ba5,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2c3faf7dae00ab65bdbde65bd5d23c3c2a663635c9c2a9c65c787950af76dcda\"" Aug 12 23:47:04.942852 containerd[1536]: time="2025-08-12T23:47:04.942361073Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 12 23:47:05.395886 kubelet[2659]: I0812 23:47:05.395817 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vjkx9" podStartSLOduration=1.395801264 podStartE2EDuration="1.395801264s" podCreationTimestamp="2025-08-12 23:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:47:05.38642221 +0000 UTC m=+8.121951734" watchObservedRunningTime="2025-08-12 
23:47:05.395801264 +0000 UTC m=+8.131330788" Aug 12 23:47:05.492676 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4084030902.mount: Deactivated successfully. Aug 12 23:47:06.116346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3318609811.mount: Deactivated successfully. Aug 12 23:47:07.102377 containerd[1536]: time="2025-08-12T23:47:07.102327723Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:07.103121 containerd[1536]: time="2025-08-12T23:47:07.103067924Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610" Aug 12 23:47:07.104110 containerd[1536]: time="2025-08-12T23:47:07.104004525Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:07.105847 containerd[1536]: time="2025-08-12T23:47:07.105792287Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:07.106778 containerd[1536]: time="2025-08-12T23:47:07.106548528Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 2.164150415s" Aug 12 23:47:07.106778 containerd[1536]: time="2025-08-12T23:47:07.106577168Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\"" Aug 12 23:47:07.111268 containerd[1536]: time="2025-08-12T23:47:07.111233095Z" level=info 
msg="CreateContainer within sandbox \"2c3faf7dae00ab65bdbde65bd5d23c3c2a663635c9c2a9c65c787950af76dcda\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 12 23:47:07.118017 containerd[1536]: time="2025-08-12T23:47:07.117452783Z" level=info msg="Container 34a99c65bec95eab5d765d379f6ecede75ff54efaf5c9c6b66d86112c76d7585: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:07.122627 containerd[1536]: time="2025-08-12T23:47:07.122573350Z" level=info msg="CreateContainer within sandbox \"2c3faf7dae00ab65bdbde65bd5d23c3c2a663635c9c2a9c65c787950af76dcda\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"34a99c65bec95eab5d765d379f6ecede75ff54efaf5c9c6b66d86112c76d7585\"" Aug 12 23:47:07.123535 containerd[1536]: time="2025-08-12T23:47:07.123143351Z" level=info msg="StartContainer for \"34a99c65bec95eab5d765d379f6ecede75ff54efaf5c9c6b66d86112c76d7585\"" Aug 12 23:47:07.124396 containerd[1536]: time="2025-08-12T23:47:07.124364953Z" level=info msg="connecting to shim 34a99c65bec95eab5d765d379f6ecede75ff54efaf5c9c6b66d86112c76d7585" address="unix:///run/containerd/s/882281c4443f7c3ff92c666e1d14459192935386c9f1e80be413f4e3a81673c9" protocol=ttrpc version=3 Aug 12 23:47:07.146259 systemd[1]: Started cri-containerd-34a99c65bec95eab5d765d379f6ecede75ff54efaf5c9c6b66d86112c76d7585.scope - libcontainer container 34a99c65bec95eab5d765d379f6ecede75ff54efaf5c9c6b66d86112c76d7585. 
Aug 12 23:47:07.185802 containerd[1536]: time="2025-08-12T23:47:07.184528435Z" level=info msg="StartContainer for \"34a99c65bec95eab5d765d379f6ecede75ff54efaf5c9c6b66d86112c76d7585\" returns successfully" Aug 12 23:47:08.002474 kubelet[2659]: I0812 23:47:08.002403 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-vzrjq" podStartSLOduration=1.8337790520000001 podStartE2EDuration="4.002385633s" podCreationTimestamp="2025-08-12 23:47:04 +0000 UTC" firstStartedPulling="2025-08-12 23:47:04.940891391 +0000 UTC m=+7.676420915" lastFinishedPulling="2025-08-12 23:47:07.109431692 +0000 UTC m=+9.845027496" observedRunningTime="2025-08-12 23:47:07.394260202 +0000 UTC m=+10.129789726" watchObservedRunningTime="2025-08-12 23:47:08.002385633 +0000 UTC m=+10.737915117" Aug 12 23:47:09.223781 systemd[1]: cri-containerd-34a99c65bec95eab5d765d379f6ecede75ff54efaf5c9c6b66d86112c76d7585.scope: Deactivated successfully. Aug 12 23:47:09.224444 systemd[1]: cri-containerd-34a99c65bec95eab5d765d379f6ecede75ff54efaf5c9c6b66d86112c76d7585.scope: Consumed 458ms CPU time, 36M memory peak, 1.5M read from disk. 
Aug 12 23:47:09.273110 containerd[1536]: time="2025-08-12T23:47:09.272893920Z" level=info msg="received exit event container_id:\"34a99c65bec95eab5d765d379f6ecede75ff54efaf5c9c6b66d86112c76d7585\" id:\"34a99c65bec95eab5d765d379f6ecede75ff54efaf5c9c6b66d86112c76d7585\" pid:2978 exit_status:1 exited_at:{seconds:1755042429 nanos:256722940}" Aug 12 23:47:09.274859 containerd[1536]: time="2025-08-12T23:47:09.274833882Z" level=info msg="TaskExit event in podsandbox handler container_id:\"34a99c65bec95eab5d765d379f6ecede75ff54efaf5c9c6b66d86112c76d7585\" id:\"34a99c65bec95eab5d765d379f6ecede75ff54efaf5c9c6b66d86112c76d7585\" pid:2978 exit_status:1 exited_at:{seconds:1755042429 nanos:256722940}" Aug 12 23:47:09.340630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34a99c65bec95eab5d765d379f6ecede75ff54efaf5c9c6b66d86112c76d7585-rootfs.mount: Deactivated successfully. Aug 12 23:47:10.412868 kubelet[2659]: I0812 23:47:10.412829 2659 scope.go:117] "RemoveContainer" containerID="34a99c65bec95eab5d765d379f6ecede75ff54efaf5c9c6b66d86112c76d7585" Aug 12 23:47:10.420407 containerd[1536]: time="2025-08-12T23:47:10.420029346Z" level=info msg="CreateContainer within sandbox \"2c3faf7dae00ab65bdbde65bd5d23c3c2a663635c9c2a9c65c787950af76dcda\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Aug 12 23:47:10.433346 containerd[1536]: time="2025-08-12T23:47:10.433301641Z" level=info msg="Container 2332aead874ae0e2859d49c0e383430da4c56136d70aece02873c01a54318f6d: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:10.435048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4000139510.mount: Deactivated successfully. 
Aug 12 23:47:10.450574 containerd[1536]: time="2025-08-12T23:47:10.450531141Z" level=info msg="CreateContainer within sandbox \"2c3faf7dae00ab65bdbde65bd5d23c3c2a663635c9c2a9c65c787950af76dcda\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2332aead874ae0e2859d49c0e383430da4c56136d70aece02873c01a54318f6d\""
Aug 12 23:47:10.451324 containerd[1536]: time="2025-08-12T23:47:10.451300702Z" level=info msg="StartContainer for \"2332aead874ae0e2859d49c0e383430da4c56136d70aece02873c01a54318f6d\""
Aug 12 23:47:10.453710 containerd[1536]: time="2025-08-12T23:47:10.453682064Z" level=info msg="connecting to shim 2332aead874ae0e2859d49c0e383430da4c56136d70aece02873c01a54318f6d" address="unix:///run/containerd/s/882281c4443f7c3ff92c666e1d14459192935386c9f1e80be413f4e3a81673c9" protocol=ttrpc version=3
Aug 12 23:47:10.484259 systemd[1]: Started cri-containerd-2332aead874ae0e2859d49c0e383430da4c56136d70aece02873c01a54318f6d.scope - libcontainer container 2332aead874ae0e2859d49c0e383430da4c56136d70aece02873c01a54318f6d.
Aug 12 23:47:10.537086 containerd[1536]: time="2025-08-12T23:47:10.537029518Z" level=info msg="StartContainer for \"2332aead874ae0e2859d49c0e383430da4c56136d70aece02873c01a54318f6d\" returns successfully"
Aug 12 23:47:11.079591 update_engine[1526]: I20250812 23:47:11.079521 1526 update_attempter.cc:509] Updating boot flags...
Aug 12 23:47:12.369821 sudo[1739]: pam_unix(sudo:session): session closed for user root
Aug 12 23:47:12.373489 sshd[1738]: Connection closed by 10.0.0.1 port 50728
Aug 12 23:47:12.373812 sshd-session[1736]: pam_unix(sshd:session): session closed for user core
Aug 12 23:47:12.377178 systemd[1]: sshd@6-10.0.0.67:22-10.0.0.1:50728.service: Deactivated successfully.
Aug 12 23:47:12.380397 systemd[1]: session-7.scope: Deactivated successfully.
Aug 12 23:47:12.380578 systemd[1]: session-7.scope: Consumed 7.213s CPU time, 216.7M memory peak.
Aug 12 23:47:12.381575 systemd-logind[1520]: Session 7 logged out. Waiting for processes to exit.
Aug 12 23:47:12.384847 systemd-logind[1520]: Removed session 7.
Aug 12 23:47:19.469232 systemd[1]: Created slice kubepods-besteffort-pod1416ae71_cb4a_4466_ab47_2864d92aa80e.slice - libcontainer container kubepods-besteffort-pod1416ae71_cb4a_4466_ab47_2864d92aa80e.slice.
Aug 12 23:47:19.475919 kubelet[2659]: I0812 23:47:19.475881 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnxn8\" (UniqueName: \"kubernetes.io/projected/1416ae71-cb4a-4466-ab47-2864d92aa80e-kube-api-access-rnxn8\") pod \"calico-typha-5867596949-cqwlv\" (UID: \"1416ae71-cb4a-4466-ab47-2864d92aa80e\") " pod="calico-system/calico-typha-5867596949-cqwlv"
Aug 12 23:47:19.477741 kubelet[2659]: I0812 23:47:19.476370 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1416ae71-cb4a-4466-ab47-2864d92aa80e-tigera-ca-bundle\") pod \"calico-typha-5867596949-cqwlv\" (UID: \"1416ae71-cb4a-4466-ab47-2864d92aa80e\") " pod="calico-system/calico-typha-5867596949-cqwlv"
Aug 12 23:47:19.477741 kubelet[2659]: I0812 23:47:19.476420 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1416ae71-cb4a-4466-ab47-2864d92aa80e-typha-certs\") pod \"calico-typha-5867596949-cqwlv\" (UID: \"1416ae71-cb4a-4466-ab47-2864d92aa80e\") " pod="calico-system/calico-typha-5867596949-cqwlv"
Aug 12 23:47:19.774346 containerd[1536]: time="2025-08-12T23:47:19.774221862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5867596949-cqwlv,Uid:1416ae71-cb4a-4466-ab47-2864d92aa80e,Namespace:calico-system,Attempt:0,}"
Aug 12 23:47:19.797137 systemd[1]: Created slice kubepods-besteffort-podea3a5680_0ba9_4286_b83b_fe7db0c6c92d.slice - libcontainer container kubepods-besteffort-podea3a5680_0ba9_4286_b83b_fe7db0c6c92d.slice.
Aug 12 23:47:19.807412 containerd[1536]: time="2025-08-12T23:47:19.807356483Z" level=info msg="connecting to shim 79c28b67e5622c3cda0655cc1648d8e35485bbf8ddcab7f458ecf26884601a70" address="unix:///run/containerd/s/74f8b285f37909ac76980ab7f544cde6f6fd8038e0a7b5ad56cbd87403f6e42a" namespace=k8s.io protocol=ttrpc version=3
Aug 12 23:47:19.842318 systemd[1]: Started cri-containerd-79c28b67e5622c3cda0655cc1648d8e35485bbf8ddcab7f458ecf26884601a70.scope - libcontainer container 79c28b67e5622c3cda0655cc1648d8e35485bbf8ddcab7f458ecf26884601a70.
Aug 12 23:47:19.880017 kubelet[2659]: I0812 23:47:19.879859 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ea3a5680-0ba9-4286-b83b-fe7db0c6c92d-flexvol-driver-host\") pod \"calico-node-x48hw\" (UID: \"ea3a5680-0ba9-4286-b83b-fe7db0c6c92d\") " pod="calico-system/calico-node-x48hw"
Aug 12 23:47:19.880017 kubelet[2659]: I0812 23:47:19.879904 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ea3a5680-0ba9-4286-b83b-fe7db0c6c92d-cni-net-dir\") pod \"calico-node-x48hw\" (UID: \"ea3a5680-0ba9-4286-b83b-fe7db0c6c92d\") " pod="calico-system/calico-node-x48hw"
Aug 12 23:47:19.880017 kubelet[2659]: I0812 23:47:19.879926 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea3a5680-0ba9-4286-b83b-fe7db0c6c92d-tigera-ca-bundle\") pod \"calico-node-x48hw\" (UID: \"ea3a5680-0ba9-4286-b83b-fe7db0c6c92d\") " pod="calico-system/calico-node-x48hw"
Aug 12 23:47:19.880017 kubelet[2659]: I0812 23:47:19.879941 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ea3a5680-0ba9-4286-b83b-fe7db0c6c92d-xtables-lock\") pod \"calico-node-x48hw\" (UID: \"ea3a5680-0ba9-4286-b83b-fe7db0c6c92d\") " pod="calico-system/calico-node-x48hw"
Aug 12 23:47:19.880017 kubelet[2659]: I0812 23:47:19.879958 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ea3a5680-0ba9-4286-b83b-fe7db0c6c92d-lib-modules\") pod \"calico-node-x48hw\" (UID: \"ea3a5680-0ba9-4286-b83b-fe7db0c6c92d\") " pod="calico-system/calico-node-x48hw"
Aug 12 23:47:19.880543 kubelet[2659]: I0812 23:47:19.879973 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ea3a5680-0ba9-4286-b83b-fe7db0c6c92d-cni-log-dir\") pod \"calico-node-x48hw\" (UID: \"ea3a5680-0ba9-4286-b83b-fe7db0c6c92d\") " pod="calico-system/calico-node-x48hw"
Aug 12 23:47:19.880543 kubelet[2659]: I0812 23:47:19.879987 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ea3a5680-0ba9-4286-b83b-fe7db0c6c92d-var-lib-calico\") pod \"calico-node-x48hw\" (UID: \"ea3a5680-0ba9-4286-b83b-fe7db0c6c92d\") " pod="calico-system/calico-node-x48hw"
Aug 12 23:47:19.880543 kubelet[2659]: I0812 23:47:19.880016 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ea3a5680-0ba9-4286-b83b-fe7db0c6c92d-var-run-calico\") pod \"calico-node-x48hw\" (UID: \"ea3a5680-0ba9-4286-b83b-fe7db0c6c92d\") " pod="calico-system/calico-node-x48hw"
Aug 12 23:47:19.880543 kubelet[2659]: I0812 23:47:19.880036 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ea3a5680-0ba9-4286-b83b-fe7db0c6c92d-policysync\") pod \"calico-node-x48hw\" (UID: \"ea3a5680-0ba9-4286-b83b-fe7db0c6c92d\") " pod="calico-system/calico-node-x48hw"
Aug 12 23:47:19.880543 kubelet[2659]: I0812 23:47:19.880050 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sjld\" (UniqueName: \"kubernetes.io/projected/ea3a5680-0ba9-4286-b83b-fe7db0c6c92d-kube-api-access-9sjld\") pod \"calico-node-x48hw\" (UID: \"ea3a5680-0ba9-4286-b83b-fe7db0c6c92d\") " pod="calico-system/calico-node-x48hw"
Aug 12 23:47:19.880822 kubelet[2659]: I0812 23:47:19.880068 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ea3a5680-0ba9-4286-b83b-fe7db0c6c92d-node-certs\") pod \"calico-node-x48hw\" (UID: \"ea3a5680-0ba9-4286-b83b-fe7db0c6c92d\") " pod="calico-system/calico-node-x48hw"
Aug 12 23:47:19.880822 kubelet[2659]: I0812 23:47:19.880098 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ea3a5680-0ba9-4286-b83b-fe7db0c6c92d-cni-bin-dir\") pod \"calico-node-x48hw\" (UID: \"ea3a5680-0ba9-4286-b83b-fe7db0c6c92d\") " pod="calico-system/calico-node-x48hw"
Aug 12 23:47:19.888528 containerd[1536]: time="2025-08-12T23:47:19.888489974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5867596949-cqwlv,Uid:1416ae71-cb4a-4466-ab47-2864d92aa80e,Namespace:calico-system,Attempt:0,} returns sandbox id \"79c28b67e5622c3cda0655cc1648d8e35485bbf8ddcab7f458ecf26884601a70\""
Aug 12 23:47:19.896923 containerd[1536]: time="2025-08-12T23:47:19.896883100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Aug 12 23:47:19.988971 kubelet[2659]: E0812 23:47:19.988933 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:19.988971 kubelet[2659]: W0812 23:47:19.988959 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:19.989119 kubelet[2659]: E0812 23:47:19.988993 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:19.993891 kubelet[2659]: E0812 23:47:19.993815 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:19.993891 kubelet[2659]: W0812 23:47:19.993838 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:19.993891 kubelet[2659]: E0812 23:47:19.993877 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.063814 kubelet[2659]: E0812 23:47:20.063237 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pptww" podUID="f0d096b2-e96c-4424-850b-b88da8b049a1"
Aug 12 23:47:20.073918 kubelet[2659]: E0812 23:47:20.073883 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.073918 kubelet[2659]: W0812 23:47:20.073910 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.074814 kubelet[2659]: E0812 23:47:20.073930 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.074814 kubelet[2659]: E0812 23:47:20.074126 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.074814 kubelet[2659]: W0812 23:47:20.074136 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.074814 kubelet[2659]: E0812 23:47:20.074200 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.074814 kubelet[2659]: E0812 23:47:20.074349 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.074814 kubelet[2659]: W0812 23:47:20.074357 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.074814 kubelet[2659]: E0812 23:47:20.074365 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.074814 kubelet[2659]: E0812 23:47:20.074578 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.074814 kubelet[2659]: W0812 23:47:20.074588 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.074814 kubelet[2659]: E0812 23:47:20.074595 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.075501 kubelet[2659]: E0812 23:47:20.074810 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.075501 kubelet[2659]: W0812 23:47:20.074819 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.075501 kubelet[2659]: E0812 23:47:20.074828 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.075501 kubelet[2659]: E0812 23:47:20.075331 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.075501 kubelet[2659]: W0812 23:47:20.075344 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.075501 kubelet[2659]: E0812 23:47:20.075446 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.075964 kubelet[2659]: E0812 23:47:20.075837 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.075964 kubelet[2659]: W0812 23:47:20.075849 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.075964 kubelet[2659]: E0812 23:47:20.075860 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.076570 kubelet[2659]: E0812 23:47:20.076151 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.076570 kubelet[2659]: W0812 23:47:20.076161 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.076570 kubelet[2659]: E0812 23:47:20.076173 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.076643 kubelet[2659]: E0812 23:47:20.076619 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.076643 kubelet[2659]: W0812 23:47:20.076630 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.076678 kubelet[2659]: E0812 23:47:20.076657 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.076859 kubelet[2659]: E0812 23:47:20.076841 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.076859 kubelet[2659]: W0812 23:47:20.076854 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.077048 kubelet[2659]: E0812 23:47:20.076863 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.077048 kubelet[2659]: E0812 23:47:20.077127 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.077048 kubelet[2659]: W0812 23:47:20.077137 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.077048 kubelet[2659]: E0812 23:47:20.077146 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.077048 kubelet[2659]: E0812 23:47:20.077326 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.077048 kubelet[2659]: W0812 23:47:20.077335 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.077048 kubelet[2659]: E0812 23:47:20.077344 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.077048 kubelet[2659]: E0812 23:47:20.077504 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.077048 kubelet[2659]: W0812 23:47:20.077513 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.077048 kubelet[2659]: E0812 23:47:20.077538 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.077981 kubelet[2659]: E0812 23:47:20.077692 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.077981 kubelet[2659]: W0812 23:47:20.077702 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.077981 kubelet[2659]: E0812 23:47:20.077710 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.077981 kubelet[2659]: E0812 23:47:20.077850 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.077981 kubelet[2659]: W0812 23:47:20.077858 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.077981 kubelet[2659]: E0812 23:47:20.077866 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.078146 kubelet[2659]: E0812 23:47:20.077987 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.078146 kubelet[2659]: W0812 23:47:20.077995 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.078146 kubelet[2659]: E0812 23:47:20.078002 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.078219 kubelet[2659]: E0812 23:47:20.078146 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.078219 kubelet[2659]: W0812 23:47:20.078155 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.078219 kubelet[2659]: E0812 23:47:20.078163 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.078814 kubelet[2659]: E0812 23:47:20.078788 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.078814 kubelet[2659]: W0812 23:47:20.078806 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.078978 kubelet[2659]: E0812 23:47:20.078821 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.079014 kubelet[2659]: E0812 23:47:20.079001 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.079014 kubelet[2659]: W0812 23:47:20.079013 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.079056 kubelet[2659]: E0812 23:47:20.079023 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.079215 kubelet[2659]: E0812 23:47:20.079202 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.079215 kubelet[2659]: W0812 23:47:20.079214 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.079273 kubelet[2659]: E0812 23:47:20.079221 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.081683 kubelet[2659]: E0812 23:47:20.081658 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.081683 kubelet[2659]: W0812 23:47:20.081674 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.081753 kubelet[2659]: E0812 23:47:20.081688 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.081753 kubelet[2659]: I0812 23:47:20.081715 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f0d096b2-e96c-4424-850b-b88da8b049a1-registration-dir\") pod \"csi-node-driver-pptww\" (UID: \"f0d096b2-e96c-4424-850b-b88da8b049a1\") " pod="calico-system/csi-node-driver-pptww"
Aug 12 23:47:20.082182 kubelet[2659]: E0812 23:47:20.082161 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.082182 kubelet[2659]: W0812 23:47:20.082177 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.082255 kubelet[2659]: E0812 23:47:20.082193 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.082255 kubelet[2659]: I0812 23:47:20.082213 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f0d096b2-e96c-4424-850b-b88da8b049a1-varrun\") pod \"csi-node-driver-pptww\" (UID: \"f0d096b2-e96c-4424-850b-b88da8b049a1\") " pod="calico-system/csi-node-driver-pptww"
Aug 12 23:47:20.082419 kubelet[2659]: E0812 23:47:20.082405 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.082419 kubelet[2659]: W0812 23:47:20.082418 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.082480 kubelet[2659]: E0812 23:47:20.082433 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.082503 kubelet[2659]: I0812 23:47:20.082479 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zjbx\" (UniqueName: \"kubernetes.io/projected/f0d096b2-e96c-4424-850b-b88da8b049a1-kube-api-access-4zjbx\") pod \"csi-node-driver-pptww\" (UID: \"f0d096b2-e96c-4424-850b-b88da8b049a1\") " pod="calico-system/csi-node-driver-pptww"
Aug 12 23:47:20.082665 kubelet[2659]: E0812 23:47:20.082651 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.082665 kubelet[2659]: W0812 23:47:20.082663 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.082723 kubelet[2659]: E0812 23:47:20.082677 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.082723 kubelet[2659]: I0812 23:47:20.082695 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f0d096b2-e96c-4424-850b-b88da8b049a1-socket-dir\") pod \"csi-node-driver-pptww\" (UID: \"f0d096b2-e96c-4424-850b-b88da8b049a1\") " pod="calico-system/csi-node-driver-pptww"
Aug 12 23:47:20.082891 kubelet[2659]: E0812 23:47:20.082879 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.082891 kubelet[2659]: W0812 23:47:20.082890 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.082942 kubelet[2659]: E0812 23:47:20.082904 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.082942 kubelet[2659]: I0812 23:47:20.082920 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0d096b2-e96c-4424-850b-b88da8b049a1-kubelet-dir\") pod \"csi-node-driver-pptww\" (UID: \"f0d096b2-e96c-4424-850b-b88da8b049a1\") " pod="calico-system/csi-node-driver-pptww"
Aug 12 23:47:20.083210 kubelet[2659]: E0812 23:47:20.083168 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.083210 kubelet[2659]: W0812 23:47:20.083188 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.084089 kubelet[2659]: E0812 23:47:20.083845 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.084184 kubelet[2659]: E0812 23:47:20.084165 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.084184 kubelet[2659]: W0812 23:47:20.084180 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.084303 kubelet[2659]: E0812 23:47:20.084277 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.084524 kubelet[2659]: E0812 23:47:20.084504 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.084574 kubelet[2659]: W0812 23:47:20.084536 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.084596 kubelet[2659]: E0812 23:47:20.084570 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.084899 kubelet[2659]: E0812 23:47:20.084803 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.084899 kubelet[2659]: W0812 23:47:20.084897 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.085030 kubelet[2659]: E0812 23:47:20.084992 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.085131 kubelet[2659]: E0812 23:47:20.085115 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.085131 kubelet[2659]: W0812 23:47:20.085128 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.085331 kubelet[2659]: E0812 23:47:20.085311 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.085527 kubelet[2659]: E0812 23:47:20.085464 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.085527 kubelet[2659]: W0812 23:47:20.085477 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.085598 kubelet[2659]: E0812 23:47:20.085534 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.086095 kubelet[2659]: E0812 23:47:20.085651 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.086095 kubelet[2659]: W0812 23:47:20.085663 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.086095 kubelet[2659]: E0812 23:47:20.085672 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.086095 kubelet[2659]: E0812 23:47:20.085851 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.086095 kubelet[2659]: W0812 23:47:20.085860 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.086095 kubelet[2659]: E0812 23:47:20.085868 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.086095 kubelet[2659]: E0812 23:47:20.086036 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.086095 kubelet[2659]: W0812 23:47:20.086046 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.086095 kubelet[2659]: E0812 23:47:20.086054 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 12 23:47:20.086348 kubelet[2659]: E0812 23:47:20.086326 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 12 23:47:20.086348 kubelet[2659]: W0812 23:47:20.086342 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 12 23:47:20.086410 kubelet[2659]: E0812 23:47:20.086353 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:20.102066 containerd[1536]: time="2025-08-12T23:47:20.102015545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x48hw,Uid:ea3a5680-0ba9-4286-b83b-fe7db0c6c92d,Namespace:calico-system,Attempt:0,}" Aug 12 23:47:20.118881 containerd[1536]: time="2025-08-12T23:47:20.118687555Z" level=info msg="connecting to shim c450bb067dbe8be739bf5ae16efce46eca5de9e963d073875273199ecff70bcb" address="unix:///run/containerd/s/70ea41b22ca2dd8eec5d344ce9b400a88871657b9f8298179bdfb7e55eaac518" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:47:20.143301 systemd[1]: Started cri-containerd-c450bb067dbe8be739bf5ae16efce46eca5de9e963d073875273199ecff70bcb.scope - libcontainer container c450bb067dbe8be739bf5ae16efce46eca5de9e963d073875273199ecff70bcb. Aug 12 23:47:20.169346 containerd[1536]: time="2025-08-12T23:47:20.169295865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-x48hw,Uid:ea3a5680-0ba9-4286-b83b-fe7db0c6c92d,Namespace:calico-system,Attempt:0,} returns sandbox id \"c450bb067dbe8be739bf5ae16efce46eca5de9e963d073875273199ecff70bcb\"" Aug 12 23:47:20.183549 kubelet[2659]: E0812 23:47:20.183518 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.183549 kubelet[2659]: W0812 23:47:20.183539 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.183549 kubelet[2659]: E0812 23:47:20.183557 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:20.183980 kubelet[2659]: E0812 23:47:20.183747 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.183980 kubelet[2659]: W0812 23:47:20.183760 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.183980 kubelet[2659]: E0812 23:47:20.183776 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:20.183980 kubelet[2659]: E0812 23:47:20.183918 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.183980 kubelet[2659]: W0812 23:47:20.183926 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.183980 kubelet[2659]: E0812 23:47:20.183941 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:20.183980 kubelet[2659]: E0812 23:47:20.184135 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.183980 kubelet[2659]: W0812 23:47:20.184147 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.185638 kubelet[2659]: E0812 23:47:20.184163 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:20.185638 kubelet[2659]: E0812 23:47:20.184392 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.185638 kubelet[2659]: W0812 23:47:20.184404 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.185638 kubelet[2659]: E0812 23:47:20.184437 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:20.185638 kubelet[2659]: E0812 23:47:20.184644 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.185638 kubelet[2659]: W0812 23:47:20.184655 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.185638 kubelet[2659]: E0812 23:47:20.184670 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:20.185638 kubelet[2659]: E0812 23:47:20.184794 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.185638 kubelet[2659]: W0812 23:47:20.184801 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.185638 kubelet[2659]: E0812 23:47:20.184847 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:20.186549 kubelet[2659]: E0812 23:47:20.184987 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.186549 kubelet[2659]: W0812 23:47:20.184997 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.186549 kubelet[2659]: E0812 23:47:20.185041 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:20.186549 kubelet[2659]: E0812 23:47:20.185144 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.186549 kubelet[2659]: W0812 23:47:20.185153 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.186549 kubelet[2659]: E0812 23:47:20.185195 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:20.186549 kubelet[2659]: E0812 23:47:20.185281 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.186549 kubelet[2659]: W0812 23:47:20.185288 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.186549 kubelet[2659]: E0812 23:47:20.185321 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:20.186549 kubelet[2659]: E0812 23:47:20.185414 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.188584 kubelet[2659]: W0812 23:47:20.185422 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.188584 kubelet[2659]: E0812 23:47:20.185467 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:20.188584 kubelet[2659]: E0812 23:47:20.185545 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.188584 kubelet[2659]: W0812 23:47:20.185552 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.188584 kubelet[2659]: E0812 23:47:20.185562 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:20.188584 kubelet[2659]: E0812 23:47:20.185749 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.188584 kubelet[2659]: W0812 23:47:20.185758 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.188584 kubelet[2659]: E0812 23:47:20.185768 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:20.188584 kubelet[2659]: E0812 23:47:20.185962 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.188584 kubelet[2659]: W0812 23:47:20.185971 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.188771 kubelet[2659]: E0812 23:47:20.185981 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:20.188771 kubelet[2659]: E0812 23:47:20.186119 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.188771 kubelet[2659]: W0812 23:47:20.186128 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.188771 kubelet[2659]: E0812 23:47:20.186141 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:20.188771 kubelet[2659]: E0812 23:47:20.186296 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.188771 kubelet[2659]: W0812 23:47:20.186304 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.188771 kubelet[2659]: E0812 23:47:20.186313 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:20.188771 kubelet[2659]: E0812 23:47:20.186582 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.188771 kubelet[2659]: W0812 23:47:20.186591 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.188771 kubelet[2659]: E0812 23:47:20.186605 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:20.188946 kubelet[2659]: E0812 23:47:20.186825 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.188946 kubelet[2659]: W0812 23:47:20.186834 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.188946 kubelet[2659]: E0812 23:47:20.186878 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:20.188946 kubelet[2659]: E0812 23:47:20.187002 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.188946 kubelet[2659]: W0812 23:47:20.187012 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.188946 kubelet[2659]: E0812 23:47:20.187097 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:20.188946 kubelet[2659]: E0812 23:47:20.187258 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.188946 kubelet[2659]: W0812 23:47:20.187268 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.188946 kubelet[2659]: E0812 23:47:20.187356 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:20.188946 kubelet[2659]: E0812 23:47:20.187826 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.189131 kubelet[2659]: W0812 23:47:20.187838 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.189131 kubelet[2659]: E0812 23:47:20.187874 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:20.189131 kubelet[2659]: E0812 23:47:20.188177 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.189131 kubelet[2659]: W0812 23:47:20.188191 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.189131 kubelet[2659]: E0812 23:47:20.188210 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:20.189401 kubelet[2659]: E0812 23:47:20.189339 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.189401 kubelet[2659]: W0812 23:47:20.189358 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.189401 kubelet[2659]: E0812 23:47:20.189380 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:20.189584 kubelet[2659]: E0812 23:47:20.189565 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.189584 kubelet[2659]: W0812 23:47:20.189584 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.189651 kubelet[2659]: E0812 23:47:20.189594 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:20.190316 kubelet[2659]: E0812 23:47:20.190282 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.190316 kubelet[2659]: W0812 23:47:20.190300 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.190316 kubelet[2659]: E0812 23:47:20.190316 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:20.201645 kubelet[2659]: E0812 23:47:20.201522 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:20.201645 kubelet[2659]: W0812 23:47:20.201543 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:20.201645 kubelet[2659]: E0812 23:47:20.201562 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:21.047489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3362553540.mount: Deactivated successfully. Aug 12 23:47:21.567445 containerd[1536]: time="2025-08-12T23:47:21.567385510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:21.567838 containerd[1536]: time="2025-08-12T23:47:21.567800670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Aug 12 23:47:21.568595 containerd[1536]: time="2025-08-12T23:47:21.568557270Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:21.570628 containerd[1536]: time="2025-08-12T23:47:21.570593031Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:21.571247 containerd[1536]: time="2025-08-12T23:47:21.571221752Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id 
\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.674293732s" Aug 12 23:47:21.571279 containerd[1536]: time="2025-08-12T23:47:21.571252232Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Aug 12 23:47:21.572471 containerd[1536]: time="2025-08-12T23:47:21.572115512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 12 23:47:21.587139 containerd[1536]: time="2025-08-12T23:47:21.587073401Z" level=info msg="CreateContainer within sandbox \"79c28b67e5622c3cda0655cc1648d8e35485bbf8ddcab7f458ecf26884601a70\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 12 23:47:21.595811 containerd[1536]: time="2025-08-12T23:47:21.594112685Z" level=info msg="Container cc9f6091e9c95b9959d23b1423c6bb87256fd8418505a802988efb1711f91c0e: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:21.601415 containerd[1536]: time="2025-08-12T23:47:21.601358969Z" level=info msg="CreateContainer within sandbox \"79c28b67e5622c3cda0655cc1648d8e35485bbf8ddcab7f458ecf26884601a70\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"cc9f6091e9c95b9959d23b1423c6bb87256fd8418505a802988efb1711f91c0e\"" Aug 12 23:47:21.601837 containerd[1536]: time="2025-08-12T23:47:21.601814209Z" level=info msg="StartContainer for \"cc9f6091e9c95b9959d23b1423c6bb87256fd8418505a802988efb1711f91c0e\"" Aug 12 23:47:21.603097 containerd[1536]: time="2025-08-12T23:47:21.602872369Z" level=info msg="connecting to shim cc9f6091e9c95b9959d23b1423c6bb87256fd8418505a802988efb1711f91c0e" address="unix:///run/containerd/s/74f8b285f37909ac76980ab7f544cde6f6fd8038e0a7b5ad56cbd87403f6e42a" protocol=ttrpc version=3 Aug 12 
23:47:21.626298 systemd[1]: Started cri-containerd-cc9f6091e9c95b9959d23b1423c6bb87256fd8418505a802988efb1711f91c0e.scope - libcontainer container cc9f6091e9c95b9959d23b1423c6bb87256fd8418505a802988efb1711f91c0e. Aug 12 23:47:21.668178 containerd[1536]: time="2025-08-12T23:47:21.668063125Z" level=info msg="StartContainer for \"cc9f6091e9c95b9959d23b1423c6bb87256fd8418505a802988efb1711f91c0e\" returns successfully" Aug 12 23:47:22.338423 kubelet[2659]: E0812 23:47:22.338367 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pptww" podUID="f0d096b2-e96c-4424-850b-b88da8b049a1" Aug 12 23:47:22.462067 kubelet[2659]: I0812 23:47:22.462000 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5867596949-cqwlv" podStartSLOduration=1.783309654 podStartE2EDuration="3.461966669s" podCreationTimestamp="2025-08-12 23:47:19 +0000 UTC" firstStartedPulling="2025-08-12 23:47:19.893319617 +0000 UTC m=+22.628849101" lastFinishedPulling="2025-08-12 23:47:21.571976592 +0000 UTC m=+24.307506116" observedRunningTime="2025-08-12 23:47:22.461014469 +0000 UTC m=+25.196543993" watchObservedRunningTime="2025-08-12 23:47:22.461966669 +0000 UTC m=+25.197496193" Aug 12 23:47:22.492101 kubelet[2659]: E0812 23:47:22.492054 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.492101 kubelet[2659]: W0812 23:47:22.492090 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.492101 kubelet[2659]: E0812 23:47:22.492111 2659 plugins.go:695] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.492304 kubelet[2659]: E0812 23:47:22.492291 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.492348 kubelet[2659]: W0812 23:47:22.492303 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.492374 kubelet[2659]: E0812 23:47:22.492349 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.492536 kubelet[2659]: E0812 23:47:22.492524 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.492536 kubelet[2659]: W0812 23:47:22.492534 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.492592 kubelet[2659]: E0812 23:47:22.492543 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.492679 kubelet[2659]: E0812 23:47:22.492670 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.492711 kubelet[2659]: W0812 23:47:22.492680 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.492711 kubelet[2659]: E0812 23:47:22.492688 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.492830 kubelet[2659]: E0812 23:47:22.492821 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.492860 kubelet[2659]: W0812 23:47:22.492831 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.492860 kubelet[2659]: E0812 23:47:22.492839 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.492963 kubelet[2659]: E0812 23:47:22.492954 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.492992 kubelet[2659]: W0812 23:47:22.492963 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.492992 kubelet[2659]: E0812 23:47:22.492970 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.493105 kubelet[2659]: E0812 23:47:22.493095 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.493139 kubelet[2659]: W0812 23:47:22.493104 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.493139 kubelet[2659]: E0812 23:47:22.493112 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.493242 kubelet[2659]: E0812 23:47:22.493233 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.493272 kubelet[2659]: W0812 23:47:22.493242 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.493272 kubelet[2659]: E0812 23:47:22.493250 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.493414 kubelet[2659]: E0812 23:47:22.493404 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.493414 kubelet[2659]: W0812 23:47:22.493413 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.493476 kubelet[2659]: E0812 23:47:22.493421 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.493555 kubelet[2659]: E0812 23:47:22.493546 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.493591 kubelet[2659]: W0812 23:47:22.493556 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.493591 kubelet[2659]: E0812 23:47:22.493563 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.493677 kubelet[2659]: E0812 23:47:22.493669 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.493703 kubelet[2659]: W0812 23:47:22.493678 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.493703 kubelet[2659]: E0812 23:47:22.493685 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.493807 kubelet[2659]: E0812 23:47:22.493798 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.493839 kubelet[2659]: W0812 23:47:22.493808 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.493839 kubelet[2659]: E0812 23:47:22.493815 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.493940 kubelet[2659]: E0812 23:47:22.493932 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.493967 kubelet[2659]: W0812 23:47:22.493941 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.493967 kubelet[2659]: E0812 23:47:22.493947 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.494062 kubelet[2659]: E0812 23:47:22.494053 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.494104 kubelet[2659]: W0812 23:47:22.494062 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.494104 kubelet[2659]: E0812 23:47:22.494069 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.494195 kubelet[2659]: E0812 23:47:22.494187 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.494223 kubelet[2659]: W0812 23:47:22.494195 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.494223 kubelet[2659]: E0812 23:47:22.494203 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.498700 kubelet[2659]: E0812 23:47:22.498623 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.498700 kubelet[2659]: W0812 23:47:22.498641 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.498700 kubelet[2659]: E0812 23:47:22.498654 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.498869 kubelet[2659]: E0812 23:47:22.498847 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.498869 kubelet[2659]: W0812 23:47:22.498863 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.498927 kubelet[2659]: E0812 23:47:22.498879 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.499042 kubelet[2659]: E0812 23:47:22.499021 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.499042 kubelet[2659]: W0812 23:47:22.499031 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.499126 kubelet[2659]: E0812 23:47:22.499043 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.499238 kubelet[2659]: E0812 23:47:22.499223 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.499238 kubelet[2659]: W0812 23:47:22.499232 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.499288 kubelet[2659]: E0812 23:47:22.499245 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.499397 kubelet[2659]: E0812 23:47:22.499385 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.499429 kubelet[2659]: W0812 23:47:22.499397 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.499429 kubelet[2659]: E0812 23:47:22.499412 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.499652 kubelet[2659]: E0812 23:47:22.499525 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.499652 kubelet[2659]: W0812 23:47:22.499536 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.499652 kubelet[2659]: E0812 23:47:22.499544 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.499814 kubelet[2659]: E0812 23:47:22.499793 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.499875 kubelet[2659]: W0812 23:47:22.499862 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.499948 kubelet[2659]: E0812 23:47:22.499934 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.500156 kubelet[2659]: E0812 23:47:22.500143 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.500230 kubelet[2659]: W0812 23:47:22.500218 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.500292 kubelet[2659]: E0812 23:47:22.500281 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.500498 kubelet[2659]: E0812 23:47:22.500484 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.500573 kubelet[2659]: W0812 23:47:22.500559 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.500647 kubelet[2659]: E0812 23:47:22.500627 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.500921 kubelet[2659]: E0812 23:47:22.500801 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.500921 kubelet[2659]: W0812 23:47:22.500814 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.500921 kubelet[2659]: E0812 23:47:22.500834 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.501071 kubelet[2659]: E0812 23:47:22.501058 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.501156 kubelet[2659]: W0812 23:47:22.501143 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.501231 kubelet[2659]: E0812 23:47:22.501213 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.501630 kubelet[2659]: E0812 23:47:22.501414 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.501630 kubelet[2659]: W0812 23:47:22.501427 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.501630 kubelet[2659]: E0812 23:47:22.501443 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.501630 kubelet[2659]: E0812 23:47:22.501616 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.501630 kubelet[2659]: W0812 23:47:22.501627 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.501630 kubelet[2659]: E0812 23:47:22.501641 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.501791 kubelet[2659]: E0812 23:47:22.501760 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.501791 kubelet[2659]: W0812 23:47:22.501768 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.501791 kubelet[2659]: E0812 23:47:22.501775 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.501926 kubelet[2659]: E0812 23:47:22.501906 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.501926 kubelet[2659]: W0812 23:47:22.501917 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.501926 kubelet[2659]: E0812 23:47:22.501926 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.504279 kubelet[2659]: E0812 23:47:22.504196 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.504279 kubelet[2659]: W0812 23:47:22.504214 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.504279 kubelet[2659]: E0812 23:47:22.504240 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.504465 kubelet[2659]: E0812 23:47:22.504428 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.504465 kubelet[2659]: W0812 23:47:22.504446 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.504465 kubelet[2659]: E0812 23:47:22.504461 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 12 23:47:22.504648 kubelet[2659]: E0812 23:47:22.504633 2659 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 12 23:47:22.504648 kubelet[2659]: W0812 23:47:22.504645 2659 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 12 23:47:22.504693 kubelet[2659]: E0812 23:47:22.504653 2659 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 12 23:47:22.903247 containerd[1536]: time="2025-08-12T23:47:22.903195098Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:22.904644 containerd[1536]: time="2025-08-12T23:47:22.904068579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Aug 12 23:47:22.905959 containerd[1536]: time="2025-08-12T23:47:22.905925300Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:22.914839 containerd[1536]: time="2025-08-12T23:47:22.914793464Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:22.915335 containerd[1536]: time="2025-08-12T23:47:22.915302385Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.343148713s" Aug 12 23:47:22.915369 containerd[1536]: time="2025-08-12T23:47:22.915342705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Aug 12 23:47:22.917871 containerd[1536]: time="2025-08-12T23:47:22.917836226Z" level=info msg="CreateContainer within sandbox \"c450bb067dbe8be739bf5ae16efce46eca5de9e963d073875273199ecff70bcb\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 12 23:47:22.926626 containerd[1536]: time="2025-08-12T23:47:22.926562270Z" level=info msg="Container 28721eca790d66122f7bc3cd83d10485f1975fd31050876386eb3bf32519d99c: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:22.935177 containerd[1536]: time="2025-08-12T23:47:22.935128795Z" level=info msg="CreateContainer within sandbox \"c450bb067dbe8be739bf5ae16efce46eca5de9e963d073875273199ecff70bcb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"28721eca790d66122f7bc3cd83d10485f1975fd31050876386eb3bf32519d99c\"" Aug 12 23:47:22.937136 containerd[1536]: time="2025-08-12T23:47:22.936118995Z" level=info msg="StartContainer for \"28721eca790d66122f7bc3cd83d10485f1975fd31050876386eb3bf32519d99c\"" Aug 12 23:47:22.938964 containerd[1536]: time="2025-08-12T23:47:22.938934597Z" level=info msg="connecting to shim 28721eca790d66122f7bc3cd83d10485f1975fd31050876386eb3bf32519d99c" address="unix:///run/containerd/s/70ea41b22ca2dd8eec5d344ce9b400a88871657b9f8298179bdfb7e55eaac518" protocol=ttrpc version=3 Aug 12 23:47:22.965332 systemd[1]: Started cri-containerd-28721eca790d66122f7bc3cd83d10485f1975fd31050876386eb3bf32519d99c.scope - libcontainer container 28721eca790d66122f7bc3cd83d10485f1975fd31050876386eb3bf32519d99c. Aug 12 23:47:23.019276 containerd[1536]: time="2025-08-12T23:47:23.019229598Z" level=info msg="StartContainer for \"28721eca790d66122f7bc3cd83d10485f1975fd31050876386eb3bf32519d99c\" returns successfully" Aug 12 23:47:23.029072 systemd[1]: cri-containerd-28721eca790d66122f7bc3cd83d10485f1975fd31050876386eb3bf32519d99c.scope: Deactivated successfully. 
Aug 12 23:47:23.031241 containerd[1536]: time="2025-08-12T23:47:23.031155404Z" level=info msg="TaskExit event in podsandbox handler container_id:\"28721eca790d66122f7bc3cd83d10485f1975fd31050876386eb3bf32519d99c\" id:\"28721eca790d66122f7bc3cd83d10485f1975fd31050876386eb3bf32519d99c\" pid:3403 exited_at:{seconds:1755042443 nanos:30750084}" Aug 12 23:47:23.040923 containerd[1536]: time="2025-08-12T23:47:23.040760328Z" level=info msg="received exit event container_id:\"28721eca790d66122f7bc3cd83d10485f1975fd31050876386eb3bf32519d99c\" id:\"28721eca790d66122f7bc3cd83d10485f1975fd31050876386eb3bf32519d99c\" pid:3403 exited_at:{seconds:1755042443 nanos:30750084}" Aug 12 23:47:23.063125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28721eca790d66122f7bc3cd83d10485f1975fd31050876386eb3bf32519d99c-rootfs.mount: Deactivated successfully. Aug 12 23:47:23.452107 containerd[1536]: time="2025-08-12T23:47:23.451481368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 12 23:47:23.453707 kubelet[2659]: I0812 23:47:23.453522 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:47:24.338402 kubelet[2659]: E0812 23:47:24.338327 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pptww" podUID="f0d096b2-e96c-4424-850b-b88da8b049a1" Aug 12 23:47:25.569907 containerd[1536]: time="2025-08-12T23:47:25.569855856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:25.570974 containerd[1536]: time="2025-08-12T23:47:25.570867536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Aug 12 23:47:25.571837 containerd[1536]: 
time="2025-08-12T23:47:25.571803896Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:25.573550 containerd[1536]: time="2025-08-12T23:47:25.573517657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:25.574493 containerd[1536]: time="2025-08-12T23:47:25.574360578Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.12281109s" Aug 12 23:47:25.574493 containerd[1536]: time="2025-08-12T23:47:25.574400698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Aug 12 23:47:25.577866 containerd[1536]: time="2025-08-12T23:47:25.577808659Z" level=info msg="CreateContainer within sandbox \"c450bb067dbe8be739bf5ae16efce46eca5de9e963d073875273199ecff70bcb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 12 23:47:25.599334 containerd[1536]: time="2025-08-12T23:47:25.599290788Z" level=info msg="Container c9f620e8c8b6d8e2548860a86fcfe37c769e14345b5e2c26802f57f955c7ce1a: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:25.607121 containerd[1536]: time="2025-08-12T23:47:25.607059032Z" level=info msg="CreateContainer within sandbox \"c450bb067dbe8be739bf5ae16efce46eca5de9e963d073875273199ecff70bcb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c9f620e8c8b6d8e2548860a86fcfe37c769e14345b5e2c26802f57f955c7ce1a\"" Aug 12 
23:47:25.609098 containerd[1536]: time="2025-08-12T23:47:25.607691752Z" level=info msg="StartContainer for \"c9f620e8c8b6d8e2548860a86fcfe37c769e14345b5e2c26802f57f955c7ce1a\"" Aug 12 23:47:25.609290 containerd[1536]: time="2025-08-12T23:47:25.609265792Z" level=info msg="connecting to shim c9f620e8c8b6d8e2548860a86fcfe37c769e14345b5e2c26802f57f955c7ce1a" address="unix:///run/containerd/s/70ea41b22ca2dd8eec5d344ce9b400a88871657b9f8298179bdfb7e55eaac518" protocol=ttrpc version=3 Aug 12 23:47:25.632325 systemd[1]: Started cri-containerd-c9f620e8c8b6d8e2548860a86fcfe37c769e14345b5e2c26802f57f955c7ce1a.scope - libcontainer container c9f620e8c8b6d8e2548860a86fcfe37c769e14345b5e2c26802f57f955c7ce1a. Aug 12 23:47:25.656132 kubelet[2659]: I0812 23:47:25.655922 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:47:25.676323 containerd[1536]: time="2025-08-12T23:47:25.676256381Z" level=info msg="StartContainer for \"c9f620e8c8b6d8e2548860a86fcfe37c769e14345b5e2c26802f57f955c7ce1a\" returns successfully" Aug 12 23:47:26.238238 systemd[1]: cri-containerd-c9f620e8c8b6d8e2548860a86fcfe37c769e14345b5e2c26802f57f955c7ce1a.scope: Deactivated successfully. Aug 12 23:47:26.238875 systemd[1]: cri-containerd-c9f620e8c8b6d8e2548860a86fcfe37c769e14345b5e2c26802f57f955c7ce1a.scope: Consumed 472ms CPU time, 174.9M memory peak, 2.4M read from disk, 165.8M written to disk. 
Aug 12 23:47:26.245426 kubelet[2659]: I0812 23:47:26.245217 2659 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 12 23:47:26.247444 containerd[1536]: time="2025-08-12T23:47:26.247287859Z" level=info msg="received exit event container_id:\"c9f620e8c8b6d8e2548860a86fcfe37c769e14345b5e2c26802f57f955c7ce1a\" id:\"c9f620e8c8b6d8e2548860a86fcfe37c769e14345b5e2c26802f57f955c7ce1a\" pid:3464 exited_at:{seconds:1755042446 nanos:246827299}" Aug 12 23:47:26.247444 containerd[1536]: time="2025-08-12T23:47:26.247405739Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9f620e8c8b6d8e2548860a86fcfe37c769e14345b5e2c26802f57f955c7ce1a\" id:\"c9f620e8c8b6d8e2548860a86fcfe37c769e14345b5e2c26802f57f955c7ce1a\" pid:3464 exited_at:{seconds:1755042446 nanos:246827299}" Aug 12 23:47:26.277206 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9f620e8c8b6d8e2548860a86fcfe37c769e14345b5e2c26802f57f955c7ce1a-rootfs.mount: Deactivated successfully. Aug 12 23:47:26.299333 systemd[1]: Created slice kubepods-besteffort-pod4a1b2d49_cbd2_4a96_bae1_a97cd4c7ed3f.slice - libcontainer container kubepods-besteffort-pod4a1b2d49_cbd2_4a96_bae1_a97cd4c7ed3f.slice. Aug 12 23:47:26.310721 systemd[1]: Created slice kubepods-besteffort-pod26eaddaa_bc86_4fd0_be2f_2ee51dea7bee.slice - libcontainer container kubepods-besteffort-pod26eaddaa_bc86_4fd0_be2f_2ee51dea7bee.slice. Aug 12 23:47:26.319161 systemd[1]: Created slice kubepods-burstable-pod85871963_4597_475f_bb66_8481545d63bc.slice - libcontainer container kubepods-burstable-pod85871963_4597_475f_bb66_8481545d63bc.slice. 
Aug 12 23:47:26.324731 kubelet[2659]: I0812 23:47:26.324653 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7qctv\" (UniqueName: \"kubernetes.io/projected/3b9ed652-c70b-48f0-b332-952d960c0797-kube-api-access-7qctv\") pod \"calico-apiserver-65dc884bfc-bjlzc\" (UID: \"3b9ed652-c70b-48f0-b332-952d960c0797\") " pod="calico-apiserver/calico-apiserver-65dc884bfc-bjlzc" Aug 12 23:47:26.325182 kubelet[2659]: I0812 23:47:26.325064 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/85871963-4597-475f-bb66-8481545d63bc-config-volume\") pod \"coredns-668d6bf9bc-58jfc\" (UID: \"85871963-4597-475f-bb66-8481545d63bc\") " pod="kube-system/coredns-668d6bf9bc-58jfc" Aug 12 23:47:26.325182 kubelet[2659]: I0812 23:47:26.325135 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px45q\" (UniqueName: \"kubernetes.io/projected/85871963-4597-475f-bb66-8481545d63bc-kube-api-access-px45q\") pod \"coredns-668d6bf9bc-58jfc\" (UID: \"85871963-4597-475f-bb66-8481545d63bc\") " pod="kube-system/coredns-668d6bf9bc-58jfc" Aug 12 23:47:26.325182 kubelet[2659]: I0812 23:47:26.325162 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3b9ed652-c70b-48f0-b332-952d960c0797-calico-apiserver-certs\") pod \"calico-apiserver-65dc884bfc-bjlzc\" (UID: \"3b9ed652-c70b-48f0-b332-952d960c0797\") " pod="calico-apiserver/calico-apiserver-65dc884bfc-bjlzc" Aug 12 23:47:26.325946 systemd[1]: Created slice kubepods-burstable-podb141f2df_c1f3_415f_9b35_005f841dcaa4.slice - libcontainer container kubepods-burstable-podb141f2df_c1f3_415f_9b35_005f841dcaa4.slice. 
Aug 12 23:47:26.326387 kubelet[2659]: I0812 23:47:26.326339 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4a1b2d49-cbd2-4a96-bae1-a97cd4c7ed3f-calico-apiserver-certs\") pod \"calico-apiserver-65dc884bfc-plshj\" (UID: \"4a1b2d49-cbd2-4a96-bae1-a97cd4c7ed3f\") " pod="calico-apiserver/calico-apiserver-65dc884bfc-plshj" Aug 12 23:47:26.326477 kubelet[2659]: I0812 23:47:26.326463 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d1261d0-4280-4aa3-b2da-87a6eebf6570-whisker-ca-bundle\") pod \"whisker-58dd695b67-fgtb7\" (UID: \"4d1261d0-4280-4aa3-b2da-87a6eebf6570\") " pod="calico-system/whisker-58dd695b67-fgtb7" Aug 12 23:47:26.326605 kubelet[2659]: I0812 23:47:26.326558 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b141f2df-c1f3-415f-9b35-005f841dcaa4-config-volume\") pod \"coredns-668d6bf9bc-lwfzw\" (UID: \"b141f2df-c1f3-415f-9b35-005f841dcaa4\") " pod="kube-system/coredns-668d6bf9bc-lwfzw" Aug 12 23:47:26.326605 kubelet[2659]: I0812 23:47:26.326582 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kbdm\" (UniqueName: \"kubernetes.io/projected/4d1261d0-4280-4aa3-b2da-87a6eebf6570-kube-api-access-5kbdm\") pod \"whisker-58dd695b67-fgtb7\" (UID: \"4d1261d0-4280-4aa3-b2da-87a6eebf6570\") " pod="calico-system/whisker-58dd695b67-fgtb7" Aug 12 23:47:26.326706 kubelet[2659]: I0812 23:47:26.326693 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nz8hh\" (UniqueName: \"kubernetes.io/projected/b141f2df-c1f3-415f-9b35-005f841dcaa4-kube-api-access-nz8hh\") pod \"coredns-668d6bf9bc-lwfzw\" (UID: 
\"b141f2df-c1f3-415f-9b35-005f841dcaa4\") " pod="kube-system/coredns-668d6bf9bc-lwfzw" Aug 12 23:47:26.326888 kubelet[2659]: I0812 23:47:26.326774 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4d1261d0-4280-4aa3-b2da-87a6eebf6570-whisker-backend-key-pair\") pod \"whisker-58dd695b67-fgtb7\" (UID: \"4d1261d0-4280-4aa3-b2da-87a6eebf6570\") " pod="calico-system/whisker-58dd695b67-fgtb7" Aug 12 23:47:26.326888 kubelet[2659]: I0812 23:47:26.326808 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqzsw\" (UniqueName: \"kubernetes.io/projected/26eaddaa-bc86-4fd0-be2f-2ee51dea7bee-kube-api-access-rqzsw\") pod \"calico-kube-controllers-6857dbbf66-llfb4\" (UID: \"26eaddaa-bc86-4fd0-be2f-2ee51dea7bee\") " pod="calico-system/calico-kube-controllers-6857dbbf66-llfb4" Aug 12 23:47:26.327089 kubelet[2659]: I0812 23:47:26.327045 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbsmg\" (UniqueName: \"kubernetes.io/projected/4a1b2d49-cbd2-4a96-bae1-a97cd4c7ed3f-kube-api-access-vbsmg\") pod \"calico-apiserver-65dc884bfc-plshj\" (UID: \"4a1b2d49-cbd2-4a96-bae1-a97cd4c7ed3f\") " pod="calico-apiserver/calico-apiserver-65dc884bfc-plshj" Aug 12 23:47:26.327661 kubelet[2659]: I0812 23:47:26.327641 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26eaddaa-bc86-4fd0-be2f-2ee51dea7bee-tigera-ca-bundle\") pod \"calico-kube-controllers-6857dbbf66-llfb4\" (UID: \"26eaddaa-bc86-4fd0-be2f-2ee51dea7bee\") " pod="calico-system/calico-kube-controllers-6857dbbf66-llfb4" Aug 12 23:47:26.332549 systemd[1]: Created slice kubepods-besteffort-pod4d1261d0_4280_4aa3_b2da_87a6eebf6570.slice - libcontainer container 
kubepods-besteffort-pod4d1261d0_4280_4aa3_b2da_87a6eebf6570.slice. Aug 12 23:47:26.337539 systemd[1]: Created slice kubepods-besteffort-pod3b9ed652_c70b_48f0_b332_952d960c0797.slice - libcontainer container kubepods-besteffort-pod3b9ed652_c70b_48f0_b332_952d960c0797.slice. Aug 12 23:47:26.379767 systemd[1]: Created slice kubepods-besteffort-podf0d096b2_e96c_4424_850b_b88da8b049a1.slice - libcontainer container kubepods-besteffort-podf0d096b2_e96c_4424_850b_b88da8b049a1.slice. Aug 12 23:47:26.387258 systemd[1]: Created slice kubepods-besteffort-podf55ce75f_fae0_4c9d_a6a9_d30c0ef88a50.slice - libcontainer container kubepods-besteffort-podf55ce75f_fae0_4c9d_a6a9_d30c0ef88a50.slice. Aug 12 23:47:26.411129 containerd[1536]: time="2025-08-12T23:47:26.411086645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pptww,Uid:f0d096b2-e96c-4424-850b-b88da8b049a1,Namespace:calico-system,Attempt:0,}" Aug 12 23:47:26.429068 kubelet[2659]: I0812 23:47:26.429014 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50-config\") pod \"goldmane-768f4c5c69-v568g\" (UID: \"f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50\") " pod="calico-system/goldmane-768f4c5c69-v568g" Aug 12 23:47:26.429224 kubelet[2659]: I0812 23:47:26.429110 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8f5b\" (UniqueName: \"kubernetes.io/projected/f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50-kube-api-access-m8f5b\") pod \"goldmane-768f4c5c69-v568g\" (UID: \"f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50\") " pod="calico-system/goldmane-768f4c5c69-v568g" Aug 12 23:47:26.429224 kubelet[2659]: I0812 23:47:26.429170 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-v568g\" (UID: \"f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50\") " pod="calico-system/goldmane-768f4c5c69-v568g" Aug 12 23:47:26.429224 kubelet[2659]: I0812 23:47:26.429219 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50-goldmane-key-pair\") pod \"goldmane-768f4c5c69-v568g\" (UID: \"f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50\") " pod="calico-system/goldmane-768f4c5c69-v568g" Aug 12 23:47:26.465259 containerd[1536]: time="2025-08-12T23:47:26.465217826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 12 23:47:26.582933 containerd[1536]: time="2025-08-12T23:47:26.582140713Z" level=error msg="Failed to destroy network for sandbox \"80f431a339b17b8f95d4ed6055218c34d88ba7b582ab9f5fc176852f9ca95fc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.584436 containerd[1536]: time="2025-08-12T23:47:26.584389554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pptww,Uid:f0d096b2-e96c-4424-850b-b88da8b049a1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"80f431a339b17b8f95d4ed6055218c34d88ba7b582ab9f5fc176852f9ca95fc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.584883 kubelet[2659]: E0812 23:47:26.584817 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80f431a339b17b8f95d4ed6055218c34d88ba7b582ab9f5fc176852f9ca95fc4\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.585041 kubelet[2659]: E0812 23:47:26.584981 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80f431a339b17b8f95d4ed6055218c34d88ba7b582ab9f5fc176852f9ca95fc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pptww" Aug 12 23:47:26.585041 kubelet[2659]: E0812 23:47:26.585008 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"80f431a339b17b8f95d4ed6055218c34d88ba7b582ab9f5fc176852f9ca95fc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pptww" Aug 12 23:47:26.585222 kubelet[2659]: E0812 23:47:26.585180 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pptww_calico-system(f0d096b2-e96c-4424-850b-b88da8b049a1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pptww_calico-system(f0d096b2-e96c-4424-850b-b88da8b049a1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"80f431a339b17b8f95d4ed6055218c34d88ba7b582ab9f5fc176852f9ca95fc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pptww" podUID="f0d096b2-e96c-4424-850b-b88da8b049a1" Aug 12 23:47:26.610344 systemd[1]: 
run-netns-cni\x2d7694aa6c\x2daff5\x2d105f\x2dc232\x2de4bcfe11d53b.mount: Deactivated successfully. Aug 12 23:47:26.614653 containerd[1536]: time="2025-08-12T23:47:26.614614126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dc884bfc-plshj,Uid:4a1b2d49-cbd2-4a96-bae1-a97cd4c7ed3f,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:47:26.616319 containerd[1536]: time="2025-08-12T23:47:26.616285807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6857dbbf66-llfb4,Uid:26eaddaa-bc86-4fd0-be2f-2ee51dea7bee,Namespace:calico-system,Attempt:0,}" Aug 12 23:47:26.627334 containerd[1536]: time="2025-08-12T23:47:26.627284011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58jfc,Uid:85871963-4597-475f-bb66-8481545d63bc,Namespace:kube-system,Attempt:0,}" Aug 12 23:47:26.634109 containerd[1536]: time="2025-08-12T23:47:26.632675213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lwfzw,Uid:b141f2df-c1f3-415f-9b35-005f841dcaa4,Namespace:kube-system,Attempt:0,}" Aug 12 23:47:26.637120 containerd[1536]: time="2025-08-12T23:47:26.636993255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58dd695b67-fgtb7,Uid:4d1261d0-4280-4aa3-b2da-87a6eebf6570,Namespace:calico-system,Attempt:0,}" Aug 12 23:47:26.658177 containerd[1536]: time="2025-08-12T23:47:26.658129384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dc884bfc-bjlzc,Uid:3b9ed652-c70b-48f0-b332-952d960c0797,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:47:26.692036 containerd[1536]: time="2025-08-12T23:47:26.691990197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-v568g,Uid:f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50,Namespace:calico-system,Attempt:0,}" Aug 12 23:47:26.732035 containerd[1536]: time="2025-08-12T23:47:26.731385413Z" level=error msg="Failed to destroy network for sandbox 
\"166f5c460aef649c9a8abf26f84a88235daff5a1a32506494bc96a500c081f92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.735481 containerd[1536]: time="2025-08-12T23:47:26.732410733Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dc884bfc-plshj,Uid:4a1b2d49-cbd2-4a96-bae1-a97cd4c7ed3f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"166f5c460aef649c9a8abf26f84a88235daff5a1a32506494bc96a500c081f92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.736245 kubelet[2659]: E0812 23:47:26.735661 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"166f5c460aef649c9a8abf26f84a88235daff5a1a32506494bc96a500c081f92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.736245 kubelet[2659]: E0812 23:47:26.735729 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"166f5c460aef649c9a8abf26f84a88235daff5a1a32506494bc96a500c081f92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65dc884bfc-plshj" Aug 12 23:47:26.736245 kubelet[2659]: E0812 23:47:26.735749 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"166f5c460aef649c9a8abf26f84a88235daff5a1a32506494bc96a500c081f92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65dc884bfc-plshj" Aug 12 23:47:26.736683 kubelet[2659]: E0812 23:47:26.735792 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65dc884bfc-plshj_calico-apiserver(4a1b2d49-cbd2-4a96-bae1-a97cd4c7ed3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65dc884bfc-plshj_calico-apiserver(4a1b2d49-cbd2-4a96-bae1-a97cd4c7ed3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"166f5c460aef649c9a8abf26f84a88235daff5a1a32506494bc96a500c081f92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65dc884bfc-plshj" podUID="4a1b2d49-cbd2-4a96-bae1-a97cd4c7ed3f" Aug 12 23:47:26.739764 containerd[1536]: time="2025-08-12T23:47:26.739720256Z" level=error msg="Failed to destroy network for sandbox \"51b64faa6dd1a68bcbd2f3e9b7adfd322e1c5ac93dd6b0630f6aa89f2fd888f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.740253 containerd[1536]: time="2025-08-12T23:47:26.740215737Z" level=error msg="Failed to destroy network for sandbox \"cf83ae435c4ff74d1a44348a93e60df50ff26460e04d014cef2a3d13203906ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.741051 containerd[1536]: 
time="2025-08-12T23:47:26.741004697Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lwfzw,Uid:b141f2df-c1f3-415f-9b35-005f841dcaa4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"51b64faa6dd1a68bcbd2f3e9b7adfd322e1c5ac93dd6b0630f6aa89f2fd888f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.741668 kubelet[2659]: E0812 23:47:26.741261 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51b64faa6dd1a68bcbd2f3e9b7adfd322e1c5ac93dd6b0630f6aa89f2fd888f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.741668 kubelet[2659]: E0812 23:47:26.741330 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51b64faa6dd1a68bcbd2f3e9b7adfd322e1c5ac93dd6b0630f6aa89f2fd888f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lwfzw" Aug 12 23:47:26.741668 kubelet[2659]: E0812 23:47:26.741349 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51b64faa6dd1a68bcbd2f3e9b7adfd322e1c5ac93dd6b0630f6aa89f2fd888f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-lwfzw" Aug 12 23:47:26.741875 kubelet[2659]: 
E0812 23:47:26.741409 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-lwfzw_kube-system(b141f2df-c1f3-415f-9b35-005f841dcaa4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-lwfzw_kube-system(b141f2df-c1f3-415f-9b35-005f841dcaa4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"51b64faa6dd1a68bcbd2f3e9b7adfd322e1c5ac93dd6b0630f6aa89f2fd888f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-lwfzw" podUID="b141f2df-c1f3-415f-9b35-005f841dcaa4" Aug 12 23:47:26.742287 containerd[1536]: time="2025-08-12T23:47:26.742235617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6857dbbf66-llfb4,Uid:26eaddaa-bc86-4fd0-be2f-2ee51dea7bee,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf83ae435c4ff74d1a44348a93e60df50ff26460e04d014cef2a3d13203906ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.742532 kubelet[2659]: E0812 23:47:26.742467 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf83ae435c4ff74d1a44348a93e60df50ff26460e04d014cef2a3d13203906ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.742714 kubelet[2659]: E0812 23:47:26.742619 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cf83ae435c4ff74d1a44348a93e60df50ff26460e04d014cef2a3d13203906ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6857dbbf66-llfb4" Aug 12 23:47:26.742714 kubelet[2659]: E0812 23:47:26.742646 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf83ae435c4ff74d1a44348a93e60df50ff26460e04d014cef2a3d13203906ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6857dbbf66-llfb4" Aug 12 23:47:26.742904 kubelet[2659]: E0812 23:47:26.742831 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6857dbbf66-llfb4_calico-system(26eaddaa-bc86-4fd0-be2f-2ee51dea7bee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6857dbbf66-llfb4_calico-system(26eaddaa-bc86-4fd0-be2f-2ee51dea7bee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf83ae435c4ff74d1a44348a93e60df50ff26460e04d014cef2a3d13203906ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6857dbbf66-llfb4" podUID="26eaddaa-bc86-4fd0-be2f-2ee51dea7bee" Aug 12 23:47:26.745001 containerd[1536]: time="2025-08-12T23:47:26.744716418Z" level=error msg="Failed to destroy network for sandbox \"5e6a175c668e5d638173b32edeed7ce0fdf191dd62ac02b2e3d96d0960e0325d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.748352 containerd[1536]: time="2025-08-12T23:47:26.747961140Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dc884bfc-bjlzc,Uid:3b9ed652-c70b-48f0-b332-952d960c0797,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e6a175c668e5d638173b32edeed7ce0fdf191dd62ac02b2e3d96d0960e0325d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.749047 kubelet[2659]: E0812 23:47:26.749008 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e6a175c668e5d638173b32edeed7ce0fdf191dd62ac02b2e3d96d0960e0325d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.749175 kubelet[2659]: E0812 23:47:26.749064 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e6a175c668e5d638173b32edeed7ce0fdf191dd62ac02b2e3d96d0960e0325d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65dc884bfc-bjlzc" Aug 12 23:47:26.749175 kubelet[2659]: E0812 23:47:26.749115 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5e6a175c668e5d638173b32edeed7ce0fdf191dd62ac02b2e3d96d0960e0325d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-65dc884bfc-bjlzc" Aug 12 23:47:26.749237 kubelet[2659]: E0812 23:47:26.749167 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-65dc884bfc-bjlzc_calico-apiserver(3b9ed652-c70b-48f0-b332-952d960c0797)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-65dc884bfc-bjlzc_calico-apiserver(3b9ed652-c70b-48f0-b332-952d960c0797)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5e6a175c668e5d638173b32edeed7ce0fdf191dd62ac02b2e3d96d0960e0325d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-65dc884bfc-bjlzc" podUID="3b9ed652-c70b-48f0-b332-952d960c0797" Aug 12 23:47:26.758568 containerd[1536]: time="2025-08-12T23:47:26.758510984Z" level=error msg="Failed to destroy network for sandbox \"d15e77b2e66ac76c46bb21636ba354d1f3d8ea5701a4b4137b3644fcd407ca1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.759595 containerd[1536]: time="2025-08-12T23:47:26.759550424Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58dd695b67-fgtb7,Uid:4d1261d0-4280-4aa3-b2da-87a6eebf6570,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d15e77b2e66ac76c46bb21636ba354d1f3d8ea5701a4b4137b3644fcd407ca1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.759926 kubelet[2659]: E0812 23:47:26.759894 2659 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d15e77b2e66ac76c46bb21636ba354d1f3d8ea5701a4b4137b3644fcd407ca1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.760185 kubelet[2659]: E0812 23:47:26.759990 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d15e77b2e66ac76c46bb21636ba354d1f3d8ea5701a4b4137b3644fcd407ca1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58dd695b67-fgtb7" Aug 12 23:47:26.760185 kubelet[2659]: E0812 23:47:26.760010 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d15e77b2e66ac76c46bb21636ba354d1f3d8ea5701a4b4137b3644fcd407ca1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-58dd695b67-fgtb7" Aug 12 23:47:26.765636 kubelet[2659]: E0812 23:47:26.760293 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-58dd695b67-fgtb7_calico-system(4d1261d0-4280-4aa3-b2da-87a6eebf6570)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-58dd695b67-fgtb7_calico-system(4d1261d0-4280-4aa3-b2da-87a6eebf6570)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d15e77b2e66ac76c46bb21636ba354d1f3d8ea5701a4b4137b3644fcd407ca1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-58dd695b67-fgtb7" podUID="4d1261d0-4280-4aa3-b2da-87a6eebf6570" Aug 12 23:47:26.767922 containerd[1536]: time="2025-08-12T23:47:26.767880908Z" level=error msg="Failed to destroy network for sandbox \"224e6fa7dfc98307921dc83433ecac90dca5db373ce25fa35d4ea9bff5b33b98\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.769202 containerd[1536]: time="2025-08-12T23:47:26.769141948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58jfc,Uid:85871963-4597-475f-bb66-8481545d63bc,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"224e6fa7dfc98307921dc83433ecac90dca5db373ce25fa35d4ea9bff5b33b98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.769437 kubelet[2659]: E0812 23:47:26.769402 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"224e6fa7dfc98307921dc83433ecac90dca5db373ce25fa35d4ea9bff5b33b98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.769515 kubelet[2659]: E0812 23:47:26.769460 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"224e6fa7dfc98307921dc83433ecac90dca5db373ce25fa35d4ea9bff5b33b98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-58jfc" Aug 12 23:47:26.769515 kubelet[2659]: E0812 23:47:26.769481 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"224e6fa7dfc98307921dc83433ecac90dca5db373ce25fa35d4ea9bff5b33b98\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-58jfc" Aug 12 23:47:26.769672 kubelet[2659]: E0812 23:47:26.769526 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-58jfc_kube-system(85871963-4597-475f-bb66-8481545d63bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-58jfc_kube-system(85871963-4597-475f-bb66-8481545d63bc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"224e6fa7dfc98307921dc83433ecac90dca5db373ce25fa35d4ea9bff5b33b98\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-58jfc" podUID="85871963-4597-475f-bb66-8481545d63bc" Aug 12 23:47:26.776430 containerd[1536]: time="2025-08-12T23:47:26.776360791Z" level=error msg="Failed to destroy network for sandbox \"1e544a4abf94894bffcbab253c04943ae18111496b84923d8de3b7d9a1f58b65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.777313 containerd[1536]: time="2025-08-12T23:47:26.777269191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-v568g,Uid:f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e544a4abf94894bffcbab253c04943ae18111496b84923d8de3b7d9a1f58b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.777582 kubelet[2659]: E0812 23:47:26.777546 2659 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e544a4abf94894bffcbab253c04943ae18111496b84923d8de3b7d9a1f58b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 12 23:47:26.777640 kubelet[2659]: E0812 23:47:26.777611 2659 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e544a4abf94894bffcbab253c04943ae18111496b84923d8de3b7d9a1f58b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-v568g" Aug 12 23:47:26.777695 kubelet[2659]: E0812 23:47:26.777638 2659 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e544a4abf94894bffcbab253c04943ae18111496b84923d8de3b7d9a1f58b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-v568g" Aug 12 23:47:26.777722 kubelet[2659]: E0812 23:47:26.777683 2659 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-v568g_calico-system(f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-v568g_calico-system(f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e544a4abf94894bffcbab253c04943ae18111496b84923d8de3b7d9a1f58b65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-v568g" podUID="f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50" Aug 12 23:47:27.601075 systemd[1]: run-netns-cni\x2d5f39c388\x2d4495\x2d64b6\x2d5de2\x2d4f75bd7a18f9.mount: Deactivated successfully. Aug 12 23:47:27.601193 systemd[1]: run-netns-cni\x2de9d8c3ad\x2df6c2\x2de676\x2d459e\x2df599c4bc0ea0.mount: Deactivated successfully. Aug 12 23:47:27.601236 systemd[1]: run-netns-cni\x2d33bc37ad\x2d67fb\x2df0ed\x2d3eef\x2dd4200ad370dc.mount: Deactivated successfully. Aug 12 23:47:27.601281 systemd[1]: run-netns-cni\x2d9011242a\x2d070c\x2dc35f\x2dbce7\x2dffdf0cbcfd5b.mount: Deactivated successfully. Aug 12 23:47:30.590637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2577603550.mount: Deactivated successfully. 
Aug 12 23:47:30.913768 containerd[1536]: time="2025-08-12T23:47:30.913141983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:30.913768 containerd[1536]: time="2025-08-12T23:47:30.913733663Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Aug 12 23:47:30.921317 containerd[1536]: time="2025-08-12T23:47:30.921265345Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 4.455991479s" Aug 12 23:47:30.921317 containerd[1536]: time="2025-08-12T23:47:30.921310345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Aug 12 23:47:30.930111 containerd[1536]: time="2025-08-12T23:47:30.928061507Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:30.930111 containerd[1536]: time="2025-08-12T23:47:30.928844428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:30.932887 containerd[1536]: time="2025-08-12T23:47:30.932828909Z" level=info msg="CreateContainer within sandbox \"c450bb067dbe8be739bf5ae16efce46eca5de9e963d073875273199ecff70bcb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 12 23:47:30.948074 containerd[1536]: time="2025-08-12T23:47:30.948036474Z" level=info msg="Container 
9a3ef371fc2716a3d82f04e28b6e1ba27802157b3762a34c5de0613bed69a435: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:30.957423 containerd[1536]: time="2025-08-12T23:47:30.957377517Z" level=info msg="CreateContainer within sandbox \"c450bb067dbe8be739bf5ae16efce46eca5de9e963d073875273199ecff70bcb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"9a3ef371fc2716a3d82f04e28b6e1ba27802157b3762a34c5de0613bed69a435\"" Aug 12 23:47:30.958101 containerd[1536]: time="2025-08-12T23:47:30.958016277Z" level=info msg="StartContainer for \"9a3ef371fc2716a3d82f04e28b6e1ba27802157b3762a34c5de0613bed69a435\"" Aug 12 23:47:30.959730 containerd[1536]: time="2025-08-12T23:47:30.959696557Z" level=info msg="connecting to shim 9a3ef371fc2716a3d82f04e28b6e1ba27802157b3762a34c5de0613bed69a435" address="unix:///run/containerd/s/70ea41b22ca2dd8eec5d344ce9b400a88871657b9f8298179bdfb7e55eaac518" protocol=ttrpc version=3 Aug 12 23:47:30.979341 systemd[1]: Started cri-containerd-9a3ef371fc2716a3d82f04e28b6e1ba27802157b3762a34c5de0613bed69a435.scope - libcontainer container 9a3ef371fc2716a3d82f04e28b6e1ba27802157b3762a34c5de0613bed69a435. Aug 12 23:47:31.027229 containerd[1536]: time="2025-08-12T23:47:31.027123818Z" level=info msg="StartContainer for \"9a3ef371fc2716a3d82f04e28b6e1ba27802157b3762a34c5de0613bed69a435\" returns successfully" Aug 12 23:47:31.223446 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 12 23:47:31.223585 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Aug 12 23:47:31.361614 kubelet[2659]: I0812 23:47:31.361567 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kbdm\" (UniqueName: \"kubernetes.io/projected/4d1261d0-4280-4aa3-b2da-87a6eebf6570-kube-api-access-5kbdm\") pod \"4d1261d0-4280-4aa3-b2da-87a6eebf6570\" (UID: \"4d1261d0-4280-4aa3-b2da-87a6eebf6570\") " Aug 12 23:47:31.362037 kubelet[2659]: I0812 23:47:31.361625 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4d1261d0-4280-4aa3-b2da-87a6eebf6570-whisker-backend-key-pair\") pod \"4d1261d0-4280-4aa3-b2da-87a6eebf6570\" (UID: \"4d1261d0-4280-4aa3-b2da-87a6eebf6570\") " Aug 12 23:47:31.362037 kubelet[2659]: I0812 23:47:31.361650 2659 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d1261d0-4280-4aa3-b2da-87a6eebf6570-whisker-ca-bundle\") pod \"4d1261d0-4280-4aa3-b2da-87a6eebf6570\" (UID: \"4d1261d0-4280-4aa3-b2da-87a6eebf6570\") " Aug 12 23:47:31.376876 kubelet[2659]: I0812 23:47:31.376830 2659 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d1261d0-4280-4aa3-b2da-87a6eebf6570-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "4d1261d0-4280-4aa3-b2da-87a6eebf6570" (UID: "4d1261d0-4280-4aa3-b2da-87a6eebf6570"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 12 23:47:31.377671 kubelet[2659]: I0812 23:47:31.377641 2659 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d1261d0-4280-4aa3-b2da-87a6eebf6570-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "4d1261d0-4280-4aa3-b2da-87a6eebf6570" (UID: "4d1261d0-4280-4aa3-b2da-87a6eebf6570"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 12 23:47:31.379591 kubelet[2659]: I0812 23:47:31.379554 2659 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d1261d0-4280-4aa3-b2da-87a6eebf6570-kube-api-access-5kbdm" (OuterVolumeSpecName: "kube-api-access-5kbdm") pod "4d1261d0-4280-4aa3-b2da-87a6eebf6570" (UID: "4d1261d0-4280-4aa3-b2da-87a6eebf6570"). InnerVolumeSpecName "kube-api-access-5kbdm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 12 23:47:31.466568 kubelet[2659]: I0812 23:47:31.462186 2659 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d1261d0-4280-4aa3-b2da-87a6eebf6570-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Aug 12 23:47:31.466568 kubelet[2659]: I0812 23:47:31.466577 2659 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5kbdm\" (UniqueName: \"kubernetes.io/projected/4d1261d0-4280-4aa3-b2da-87a6eebf6570-kube-api-access-5kbdm\") on node \"localhost\" DevicePath \"\"" Aug 12 23:47:31.466713 kubelet[2659]: I0812 23:47:31.466590 2659 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/4d1261d0-4280-4aa3-b2da-87a6eebf6570-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Aug 12 23:47:31.492715 systemd[1]: Removed slice kubepods-besteffort-pod4d1261d0_4280_4aa3_b2da_87a6eebf6570.slice - libcontainer container kubepods-besteffort-pod4d1261d0_4280_4aa3_b2da_87a6eebf6570.slice. 
Aug 12 23:47:31.508836 kubelet[2659]: I0812 23:47:31.508767 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-x48hw" podStartSLOduration=1.7629394010000001 podStartE2EDuration="12.508751918s" podCreationTimestamp="2025-08-12 23:47:19 +0000 UTC" firstStartedPulling="2025-08-12 23:47:20.176152429 +0000 UTC m=+22.911681953" lastFinishedPulling="2025-08-12 23:47:30.921964946 +0000 UTC m=+33.657494470" observedRunningTime="2025-08-12 23:47:31.508484238 +0000 UTC m=+34.244013802" watchObservedRunningTime="2025-08-12 23:47:31.508751918 +0000 UTC m=+34.244281442" Aug 12 23:47:31.567616 systemd[1]: Created slice kubepods-besteffort-pod57472ac5_2650_4635_b5bd_06017b95f5d8.slice - libcontainer container kubepods-besteffort-pod57472ac5_2650_4635_b5bd_06017b95f5d8.slice. Aug 12 23:47:31.590342 systemd[1]: var-lib-kubelet-pods-4d1261d0\x2d4280\x2d4aa3\x2db2da\x2d87a6eebf6570-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5kbdm.mount: Deactivated successfully. Aug 12 23:47:31.590448 systemd[1]: var-lib-kubelet-pods-4d1261d0\x2d4280\x2d4aa3\x2db2da\x2d87a6eebf6570-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 12 23:47:31.668516 kubelet[2659]: I0812 23:47:31.668380 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57472ac5-2650-4635-b5bd-06017b95f5d8-whisker-ca-bundle\") pod \"whisker-5c454b4b85-xcz8q\" (UID: \"57472ac5-2650-4635-b5bd-06017b95f5d8\") " pod="calico-system/whisker-5c454b4b85-xcz8q" Aug 12 23:47:31.668516 kubelet[2659]: I0812 23:47:31.668427 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/57472ac5-2650-4635-b5bd-06017b95f5d8-whisker-backend-key-pair\") pod \"whisker-5c454b4b85-xcz8q\" (UID: \"57472ac5-2650-4635-b5bd-06017b95f5d8\") " pod="calico-system/whisker-5c454b4b85-xcz8q" Aug 12 23:47:31.668516 kubelet[2659]: I0812 23:47:31.668453 2659 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vm97n\" (UniqueName: \"kubernetes.io/projected/57472ac5-2650-4635-b5bd-06017b95f5d8-kube-api-access-vm97n\") pod \"whisker-5c454b4b85-xcz8q\" (UID: \"57472ac5-2650-4635-b5bd-06017b95f5d8\") " pod="calico-system/whisker-5c454b4b85-xcz8q" Aug 12 23:47:31.871319 containerd[1536]: time="2025-08-12T23:47:31.871270503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c454b4b85-xcz8q,Uid:57472ac5-2650-4635-b5bd-06017b95f5d8,Namespace:calico-system,Attempt:0,}" Aug 12 23:47:32.103441 systemd-networkd[1439]: cali6c73f02ee33: Link UP Aug 12 23:47:32.103919 systemd-networkd[1439]: cali6c73f02ee33: Gained carrier Aug 12 23:47:32.117694 containerd[1536]: 2025-08-12 23:47:31.911 [INFO][3842] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 12 23:47:32.117694 containerd[1536]: 2025-08-12 23:47:31.956 [INFO][3842] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--5c454b4b85--xcz8q-eth0 whisker-5c454b4b85- calico-system 57472ac5-2650-4635-b5bd-06017b95f5d8 898 0 2025-08-12 23:47:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5c454b4b85 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5c454b4b85-xcz8q eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali6c73f02ee33 [] [] }} ContainerID="dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" Namespace="calico-system" Pod="whisker-5c454b4b85-xcz8q" WorkloadEndpoint="localhost-k8s-whisker--5c454b4b85--xcz8q-" Aug 12 23:47:32.117694 containerd[1536]: 2025-08-12 23:47:31.956 [INFO][3842] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" Namespace="calico-system" Pod="whisker-5c454b4b85-xcz8q" WorkloadEndpoint="localhost-k8s-whisker--5c454b4b85--xcz8q-eth0" Aug 12 23:47:32.117694 containerd[1536]: 2025-08-12 23:47:32.058 [INFO][3855] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" HandleID="k8s-pod-network.dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" Workload="localhost-k8s-whisker--5c454b4b85--xcz8q-eth0" Aug 12 23:47:32.118075 containerd[1536]: 2025-08-12 23:47:32.058 [INFO][3855] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" HandleID="k8s-pod-network.dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" Workload="localhost-k8s-whisker--5c454b4b85--xcz8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400017e360), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5c454b4b85-xcz8q", "timestamp":"2025-08-12 23:47:32.058194676 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:47:32.118075 containerd[1536]: 2025-08-12 23:47:32.058 [INFO][3855] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:47:32.118075 containerd[1536]: 2025-08-12 23:47:32.058 [INFO][3855] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:47:32.118075 containerd[1536]: 2025-08-12 23:47:32.058 [INFO][3855] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:47:32.118075 containerd[1536]: 2025-08-12 23:47:32.071 [INFO][3855] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" host="localhost" Aug 12 23:47:32.118075 containerd[1536]: 2025-08-12 23:47:32.076 [INFO][3855] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:47:32.118075 containerd[1536]: 2025-08-12 23:47:32.079 [INFO][3855] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:47:32.118075 containerd[1536]: 2025-08-12 23:47:32.081 [INFO][3855] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:47:32.118075 containerd[1536]: 2025-08-12 23:47:32.083 [INFO][3855] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:47:32.118075 containerd[1536]: 2025-08-12 23:47:32.083 [INFO][3855] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" host="localhost" Aug 12 23:47:32.118308 containerd[1536]: 2025-08-12 23:47:32.084 [INFO][3855] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c Aug 12 23:47:32.118308 containerd[1536]: 2025-08-12 23:47:32.088 [INFO][3855] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" host="localhost" Aug 12 23:47:32.118308 containerd[1536]: 2025-08-12 23:47:32.092 [INFO][3855] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" host="localhost" Aug 12 23:47:32.118308 containerd[1536]: 2025-08-12 23:47:32.092 [INFO][3855] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" host="localhost" Aug 12 23:47:32.118308 containerd[1536]: 2025-08-12 23:47:32.092 [INFO][3855] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:47:32.118308 containerd[1536]: 2025-08-12 23:47:32.092 [INFO][3855] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" HandleID="k8s-pod-network.dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" Workload="localhost-k8s-whisker--5c454b4b85--xcz8q-eth0" Aug 12 23:47:32.118432 containerd[1536]: 2025-08-12 23:47:32.095 [INFO][3842] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" Namespace="calico-system" Pod="whisker-5c454b4b85-xcz8q" WorkloadEndpoint="localhost-k8s-whisker--5c454b4b85--xcz8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c454b4b85--xcz8q-eth0", GenerateName:"whisker-5c454b4b85-", Namespace:"calico-system", SelfLink:"", UID:"57472ac5-2650-4635-b5bd-06017b95f5d8", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c454b4b85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5c454b4b85-xcz8q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6c73f02ee33", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:47:32.118432 containerd[1536]: 2025-08-12 23:47:32.095 [INFO][3842] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" Namespace="calico-system" Pod="whisker-5c454b4b85-xcz8q" WorkloadEndpoint="localhost-k8s-whisker--5c454b4b85--xcz8q-eth0" Aug 12 23:47:32.118512 containerd[1536]: 2025-08-12 23:47:32.095 [INFO][3842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6c73f02ee33 ContainerID="dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" Namespace="calico-system" Pod="whisker-5c454b4b85-xcz8q" WorkloadEndpoint="localhost-k8s-whisker--5c454b4b85--xcz8q-eth0" Aug 12 23:47:32.118512 containerd[1536]: 2025-08-12 23:47:32.105 [INFO][3842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" Namespace="calico-system" Pod="whisker-5c454b4b85-xcz8q" WorkloadEndpoint="localhost-k8s-whisker--5c454b4b85--xcz8q-eth0" Aug 12 23:47:32.118554 containerd[1536]: 2025-08-12 23:47:32.106 [INFO][3842] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" Namespace="calico-system" Pod="whisker-5c454b4b85-xcz8q" WorkloadEndpoint="localhost-k8s-whisker--5c454b4b85--xcz8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5c454b4b85--xcz8q-eth0", GenerateName:"whisker-5c454b4b85-", Namespace:"calico-system", SelfLink:"", UID:"57472ac5-2650-4635-b5bd-06017b95f5d8", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 31, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5c454b4b85", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c", Pod:"whisker-5c454b4b85-xcz8q", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali6c73f02ee33", MAC:"4a:00:77:b0:7d:64", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:47:32.118596 containerd[1536]: 2025-08-12 23:47:32.115 [INFO][3842] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" Namespace="calico-system" Pod="whisker-5c454b4b85-xcz8q" WorkloadEndpoint="localhost-k8s-whisker--5c454b4b85--xcz8q-eth0" Aug 12 23:47:32.178476 containerd[1536]: time="2025-08-12T23:47:32.178422149Z" level=info msg="connecting to shim dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c" address="unix:///run/containerd/s/795de3432d9b18a17dd148bd860bd402c76366c33bd9af2e1fdfdbaedb537fb4" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:47:32.209239 systemd[1]: Started cri-containerd-dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c.scope - libcontainer container dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c. 
Aug 12 23:47:32.221058 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:47:32.240197 containerd[1536]: time="2025-08-12T23:47:32.240150806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c454b4b85-xcz8q,Uid:57472ac5-2650-4635-b5bd-06017b95f5d8,Namespace:calico-system,Attempt:0,} returns sandbox id \"dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c\"" Aug 12 23:47:32.241519 containerd[1536]: time="2025-08-12T23:47:32.241487446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 12 23:47:32.488749 kubelet[2659]: I0812 23:47:32.488716 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:47:32.912848 systemd-networkd[1439]: vxlan.calico: Link UP Aug 12 23:47:32.912855 systemd-networkd[1439]: vxlan.calico: Gained carrier Aug 12 23:47:33.339720 containerd[1536]: time="2025-08-12T23:47:33.339326299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:33.340196 containerd[1536]: time="2025-08-12T23:47:33.339922699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Aug 12 23:47:33.341219 containerd[1536]: time="2025-08-12T23:47:33.341192500Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:33.344013 containerd[1536]: time="2025-08-12T23:47:33.343981660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:33.345231 containerd[1536]: time="2025-08-12T23:47:33.345200541Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.103676455s" Aug 12 23:47:33.345231 containerd[1536]: time="2025-08-12T23:47:33.345230661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Aug 12 23:47:33.345957 kubelet[2659]: I0812 23:47:33.345931 2659 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d1261d0-4280-4aa3-b2da-87a6eebf6570" path="/var/lib/kubelet/pods/4d1261d0-4280-4aa3-b2da-87a6eebf6570/volumes" Aug 12 23:47:33.347179 containerd[1536]: time="2025-08-12T23:47:33.347148741Z" level=info msg="CreateContainer within sandbox \"dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 12 23:47:33.353856 containerd[1536]: time="2025-08-12T23:47:33.353153343Z" level=info msg="Container 6b762cccde9baf6c3183e2c154ffeb822fe95f8f065520ce1a036049c276d9ed: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:33.362741 containerd[1536]: time="2025-08-12T23:47:33.362617065Z" level=info msg="CreateContainer within sandbox \"dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"6b762cccde9baf6c3183e2c154ffeb822fe95f8f065520ce1a036049c276d9ed\"" Aug 12 23:47:33.363291 containerd[1536]: time="2025-08-12T23:47:33.363263305Z" level=info msg="StartContainer for \"6b762cccde9baf6c3183e2c154ffeb822fe95f8f065520ce1a036049c276d9ed\"" Aug 12 23:47:33.364533 containerd[1536]: time="2025-08-12T23:47:33.364509346Z" level=info msg="connecting to shim 
6b762cccde9baf6c3183e2c154ffeb822fe95f8f065520ce1a036049c276d9ed" address="unix:///run/containerd/s/795de3432d9b18a17dd148bd860bd402c76366c33bd9af2e1fdfdbaedb537fb4" protocol=ttrpc version=3 Aug 12 23:47:33.390281 systemd[1]: Started cri-containerd-6b762cccde9baf6c3183e2c154ffeb822fe95f8f065520ce1a036049c276d9ed.scope - libcontainer container 6b762cccde9baf6c3183e2c154ffeb822fe95f8f065520ce1a036049c276d9ed. Aug 12 23:47:33.423124 systemd-networkd[1439]: cali6c73f02ee33: Gained IPv6LL Aug 12 23:47:33.449183 containerd[1536]: time="2025-08-12T23:47:33.449148327Z" level=info msg="StartContainer for \"6b762cccde9baf6c3183e2c154ffeb822fe95f8f065520ce1a036049c276d9ed\" returns successfully" Aug 12 23:47:33.452595 containerd[1536]: time="2025-08-12T23:47:33.452568168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 12 23:47:34.955459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4007303756.mount: Deactivated successfully. Aug 12 23:47:34.957468 systemd-networkd[1439]: vxlan.calico: Gained IPv6LL Aug 12 23:47:34.988577 containerd[1536]: time="2025-08-12T23:47:34.988527545Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:34.988982 containerd[1536]: time="2025-08-12T23:47:34.988953945Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Aug 12 23:47:34.989703 containerd[1536]: time="2025-08-12T23:47:34.989667105Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:34.997368 containerd[1536]: time="2025-08-12T23:47:34.997305787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:34.997982 containerd[1536]: time="2025-08-12T23:47:34.997861987Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.545260539s" Aug 12 23:47:34.997982 containerd[1536]: time="2025-08-12T23:47:34.997893627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Aug 12 23:47:35.001817 containerd[1536]: time="2025-08-12T23:47:35.001787548Z" level=info msg="CreateContainer within sandbox \"dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 12 23:47:35.008936 containerd[1536]: time="2025-08-12T23:47:35.008173589Z" level=info msg="Container f68663533035ac93821a07dee30d4b87aa184641b1d98f7ed2f9f6fe29635661: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:35.021573 containerd[1536]: time="2025-08-12T23:47:35.021533392Z" level=info msg="CreateContainer within sandbox \"dbe21870fbb125ea8ad55f30e6795a35ed6abbe22bdbdc64f7605ef4143e4e8c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f68663533035ac93821a07dee30d4b87aa184641b1d98f7ed2f9f6fe29635661\"" Aug 12 23:47:35.022359 containerd[1536]: time="2025-08-12T23:47:35.022318752Z" level=info msg="StartContainer for \"f68663533035ac93821a07dee30d4b87aa184641b1d98f7ed2f9f6fe29635661\"" Aug 12 23:47:35.024507 containerd[1536]: time="2025-08-12T23:47:35.024464433Z" level=info msg="connecting to shim f68663533035ac93821a07dee30d4b87aa184641b1d98f7ed2f9f6fe29635661" 
address="unix:///run/containerd/s/795de3432d9b18a17dd148bd860bd402c76366c33bd9af2e1fdfdbaedb537fb4" protocol=ttrpc version=3 Aug 12 23:47:35.064241 systemd[1]: Started cri-containerd-f68663533035ac93821a07dee30d4b87aa184641b1d98f7ed2f9f6fe29635661.scope - libcontainer container f68663533035ac93821a07dee30d4b87aa184641b1d98f7ed2f9f6fe29635661. Aug 12 23:47:35.100837 containerd[1536]: time="2025-08-12T23:47:35.100803890Z" level=info msg="StartContainer for \"f68663533035ac93821a07dee30d4b87aa184641b1d98f7ed2f9f6fe29635661\" returns successfully" Aug 12 23:47:35.510535 kubelet[2659]: I0812 23:47:35.510085 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5c454b4b85-xcz8q" podStartSLOduration=1.752130721 podStartE2EDuration="4.510050142s" podCreationTimestamp="2025-08-12 23:47:31 +0000 UTC" firstStartedPulling="2025-08-12 23:47:32.241271246 +0000 UTC m=+34.976800770" lastFinishedPulling="2025-08-12 23:47:34.999190667 +0000 UTC m=+37.734720191" observedRunningTime="2025-08-12 23:47:35.508690381 +0000 UTC m=+38.244219905" watchObservedRunningTime="2025-08-12 23:47:35.510050142 +0000 UTC m=+38.245579666" Aug 12 23:47:37.338705 containerd[1536]: time="2025-08-12T23:47:37.338652569Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lwfzw,Uid:b141f2df-c1f3-415f-9b35-005f841dcaa4,Namespace:kube-system,Attempt:0,}" Aug 12 23:47:37.450454 systemd-networkd[1439]: calia6135659bb8: Link UP Aug 12 23:47:37.451363 systemd-networkd[1439]: calia6135659bb8: Gained carrier Aug 12 23:47:37.468027 containerd[1536]: 2025-08-12 23:47:37.383 [INFO][4208] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--lwfzw-eth0 coredns-668d6bf9bc- kube-system b141f2df-c1f3-415f-9b35-005f841dcaa4 836 0 2025-08-12 23:47:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-lwfzw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia6135659bb8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" Namespace="kube-system" Pod="coredns-668d6bf9bc-lwfzw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lwfzw-"
Aug 12 23:47:37.468027 containerd[1536]: 2025-08-12 23:47:37.383 [INFO][4208] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" Namespace="kube-system" Pod="coredns-668d6bf9bc-lwfzw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lwfzw-eth0"
Aug 12 23:47:37.468027 containerd[1536]: 2025-08-12 23:47:37.411 [INFO][4223] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" HandleID="k8s-pod-network.238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" Workload="localhost-k8s-coredns--668d6bf9bc--lwfzw-eth0"
Aug 12 23:47:37.468409 containerd[1536]: 2025-08-12 23:47:37.411 [INFO][4223] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" HandleID="k8s-pod-network.238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" Workload="localhost-k8s-coredns--668d6bf9bc--lwfzw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004da10), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-lwfzw", "timestamp":"2025-08-12 23:47:37.411743703 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 12 23:47:37.468409 containerd[1536]: 2025-08-12 23:47:37.411 [INFO][4223] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 12 23:47:37.468409 containerd[1536]: 2025-08-12 23:47:37.412 [INFO][4223] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 12 23:47:37.468409 containerd[1536]: 2025-08-12 23:47:37.412 [INFO][4223] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 12 23:47:37.468409 containerd[1536]: 2025-08-12 23:47:37.421 [INFO][4223] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" host="localhost"
Aug 12 23:47:37.468409 containerd[1536]: 2025-08-12 23:47:37.426 [INFO][4223] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Aug 12 23:47:37.468409 containerd[1536]: 2025-08-12 23:47:37.431 [INFO][4223] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 12 23:47:37.468409 containerd[1536]: 2025-08-12 23:47:37.433 [INFO][4223] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 12 23:47:37.468409 containerd[1536]: 2025-08-12 23:47:37.435 [INFO][4223] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 12 23:47:37.468409 containerd[1536]: 2025-08-12 23:47:37.435 [INFO][4223] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" host="localhost"
Aug 12 23:47:37.468629 containerd[1536]: 2025-08-12 23:47:37.436 [INFO][4223] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23
Aug 12 23:47:37.468629 containerd[1536]: 2025-08-12 23:47:37.439 [INFO][4223] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" host="localhost"
Aug 12 23:47:37.468629 containerd[1536]: 2025-08-12 23:47:37.444 [INFO][4223] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" host="localhost"
Aug 12 23:47:37.468629 containerd[1536]: 2025-08-12 23:47:37.444 [INFO][4223] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" host="localhost"
Aug 12 23:47:37.468629 containerd[1536]: 2025-08-12 23:47:37.444 [INFO][4223] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 12 23:47:37.468629 containerd[1536]: 2025-08-12 23:47:37.444 [INFO][4223] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" HandleID="k8s-pod-network.238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" Workload="localhost-k8s-coredns--668d6bf9bc--lwfzw-eth0"
Aug 12 23:47:37.468849 containerd[1536]: 2025-08-12 23:47:37.447 [INFO][4208] cni-plugin/k8s.go 418: Populated endpoint ContainerID="238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" Namespace="kube-system" Pod="coredns-668d6bf9bc-lwfzw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lwfzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lwfzw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b141f2df-c1f3-415f-9b35-005f841dcaa4", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-lwfzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6135659bb8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 12 23:47:37.468972 containerd[1536]: 2025-08-12 23:47:37.447 [INFO][4208] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" Namespace="kube-system" Pod="coredns-668d6bf9bc-lwfzw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lwfzw-eth0"
Aug 12 23:47:37.468972 containerd[1536]: 2025-08-12 23:47:37.447 [INFO][4208] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia6135659bb8 ContainerID="238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" Namespace="kube-system" Pod="coredns-668d6bf9bc-lwfzw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lwfzw-eth0"
Aug 12 23:47:37.468972 containerd[1536]: 2025-08-12 23:47:37.451 [INFO][4208] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" Namespace="kube-system" Pod="coredns-668d6bf9bc-lwfzw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lwfzw-eth0"
Aug 12 23:47:37.469200 containerd[1536]: 2025-08-12 23:47:37.451 [INFO][4208] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" Namespace="kube-system" Pod="coredns-668d6bf9bc-lwfzw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lwfzw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--lwfzw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b141f2df-c1f3-415f-9b35-005f841dcaa4", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 4, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23", Pod:"coredns-668d6bf9bc-lwfzw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia6135659bb8", MAC:"c6:9b:be:df:c4:17", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 12 23:47:37.469200 containerd[1536]: 2025-08-12 23:47:37.464 [INFO][4208] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" Namespace="kube-system" Pod="coredns-668d6bf9bc-lwfzw" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--lwfzw-eth0"
Aug 12 23:47:37.513028 containerd[1536]: time="2025-08-12T23:47:37.512983243Z" level=info msg="connecting to shim 238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23" address="unix:///run/containerd/s/00e4293091b8ef134ca4277e0a2575c1a60ce53cae41dc8c7759d9c4cbbba317" namespace=k8s.io protocol=ttrpc version=3
Aug 12 23:47:37.537299 systemd[1]: Started cri-containerd-238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23.scope - libcontainer container 238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23.
Aug 12 23:47:37.548257 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 12 23:47:37.574387 containerd[1536]: time="2025-08-12T23:47:37.574283935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lwfzw,Uid:b141f2df-c1f3-415f-9b35-005f841dcaa4,Namespace:kube-system,Attempt:0,} returns sandbox id \"238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23\""
Aug 12 23:47:37.577012 containerd[1536]: time="2025-08-12T23:47:37.576961616Z" level=info msg="CreateContainer within sandbox \"238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Aug 12 23:47:37.591962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2417506383.mount: Deactivated successfully.
Aug 12 23:47:37.599710 containerd[1536]: time="2025-08-12T23:47:37.589241738Z" level=info msg="Container 562cec0879f919b99ecf7731e83014ef879197f6eed8b90cd876f6852a0bfe88: CDI devices from CRI Config.CDIDevices: []"
Aug 12 23:47:37.604713 containerd[1536]: time="2025-08-12T23:47:37.604584421Z" level=info msg="CreateContainer within sandbox \"238c93b54e8a2cc250cbc0ae2e3e5187ca14b80c5797eb70539fee5e4db44d23\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"562cec0879f919b99ecf7731e83014ef879197f6eed8b90cd876f6852a0bfe88\""
Aug 12 23:47:37.605757 containerd[1536]: time="2025-08-12T23:47:37.605718182Z" level=info msg="StartContainer for \"562cec0879f919b99ecf7731e83014ef879197f6eed8b90cd876f6852a0bfe88\""
Aug 12 23:47:37.607015 containerd[1536]: time="2025-08-12T23:47:37.606964502Z" level=info msg="connecting to shim 562cec0879f919b99ecf7731e83014ef879197f6eed8b90cd876f6852a0bfe88" address="unix:///run/containerd/s/00e4293091b8ef134ca4277e0a2575c1a60ce53cae41dc8c7759d9c4cbbba317" protocol=ttrpc version=3
Aug 12 23:47:37.637308 systemd[1]: Started cri-containerd-562cec0879f919b99ecf7731e83014ef879197f6eed8b90cd876f6852a0bfe88.scope - libcontainer container 562cec0879f919b99ecf7731e83014ef879197f6eed8b90cd876f6852a0bfe88.
Aug 12 23:47:37.676675 containerd[1536]: time="2025-08-12T23:47:37.674113795Z" level=info msg="StartContainer for \"562cec0879f919b99ecf7731e83014ef879197f6eed8b90cd876f6852a0bfe88\" returns successfully"
Aug 12 23:47:38.339532 containerd[1536]: time="2025-08-12T23:47:38.339471002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-v568g,Uid:f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50,Namespace:calico-system,Attempt:0,}"
Aug 12 23:47:38.463104 systemd-networkd[1439]: calic9a6fc5e139: Link UP
Aug 12 23:47:38.464183 systemd-networkd[1439]: calic9a6fc5e139: Gained carrier
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.401 [INFO][4332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--v568g-eth0 goldmane-768f4c5c69- calico-system f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50 838 0 2025-08-12 23:47:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-v568g eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic9a6fc5e139 [] [] }} ContainerID="d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" Namespace="calico-system" Pod="goldmane-768f4c5c69-v568g" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--v568g-"
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.401 [INFO][4332] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" Namespace="calico-system" Pod="goldmane-768f4c5c69-v568g" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--v568g-eth0"
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.422 [INFO][4347] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" HandleID="k8s-pod-network.d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" Workload="localhost-k8s-goldmane--768f4c5c69--v568g-eth0"
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.423 [INFO][4347] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" HandleID="k8s-pod-network.d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" Workload="localhost-k8s-goldmane--768f4c5c69--v568g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-v568g", "timestamp":"2025-08-12 23:47:38.422918417 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.423 [INFO][4347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.423 [INFO][4347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.423 [INFO][4347] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.431 [INFO][4347] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" host="localhost"
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.437 [INFO][4347] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.441 [INFO][4347] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.442 [INFO][4347] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.444 [INFO][4347] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.445 [INFO][4347] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" host="localhost"
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.446 [INFO][4347] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.449 [INFO][4347] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" host="localhost"
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.455 [INFO][4347] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" host="localhost"
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.455 [INFO][4347] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" host="localhost"
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.455 [INFO][4347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 12 23:47:38.477945 containerd[1536]: 2025-08-12 23:47:38.455 [INFO][4347] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" HandleID="k8s-pod-network.d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" Workload="localhost-k8s-goldmane--768f4c5c69--v568g-eth0"
Aug 12 23:47:38.478536 containerd[1536]: 2025-08-12 23:47:38.457 [INFO][4332] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" Namespace="calico-system" Pod="goldmane-768f4c5c69-v568g" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--v568g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--v568g-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-v568g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9a6fc5e139", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 12 23:47:38.478536 containerd[1536]: 2025-08-12 23:47:38.457 [INFO][4332] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" Namespace="calico-system" Pod="goldmane-768f4c5c69-v568g" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--v568g-eth0"
Aug 12 23:47:38.478536 containerd[1536]: 2025-08-12 23:47:38.457 [INFO][4332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9a6fc5e139 ContainerID="d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" Namespace="calico-system" Pod="goldmane-768f4c5c69-v568g" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--v568g-eth0"
Aug 12 23:47:38.478536 containerd[1536]: 2025-08-12 23:47:38.465 [INFO][4332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" Namespace="calico-system" Pod="goldmane-768f4c5c69-v568g" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--v568g-eth0"
Aug 12 23:47:38.478536 containerd[1536]: 2025-08-12 23:47:38.466 [INFO][4332] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" Namespace="calico-system" Pod="goldmane-768f4c5c69-v568g" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--v568g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--v568g-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 19, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596", Pod:"goldmane-768f4c5c69-v568g", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic9a6fc5e139", MAC:"0a:2d:37:98:1d:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 12 23:47:38.478536 containerd[1536]: 2025-08-12 23:47:38.475 [INFO][4332] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" Namespace="calico-system" Pod="goldmane-768f4c5c69-v568g" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--v568g-eth0"
Aug 12 23:47:38.494542 containerd[1536]: time="2025-08-12T23:47:38.494504751Z" level=info msg="connecting to shim d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596" address="unix:///run/containerd/s/98e140410e79f5e1e591ef5977ad9fbf39ba79d59c7d784e09a76fce112771ec" namespace=k8s.io protocol=ttrpc version=3
Aug 12 23:47:38.523166 kubelet[2659]: I0812 23:47:38.523035 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lwfzw" podStartSLOduration=34.523019796 podStartE2EDuration="34.523019796s" podCreationTimestamp="2025-08-12 23:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:47:38.520911316 +0000 UTC m=+41.256440840" watchObservedRunningTime="2025-08-12 23:47:38.523019796 +0000 UTC m=+41.258549320"
Aug 12 23:47:38.525287 systemd[1]: Started cri-containerd-d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596.scope - libcontainer container d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596.
Aug 12 23:47:38.541557 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Aug 12 23:47:38.569686 containerd[1536]: time="2025-08-12T23:47:38.569642565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-v568g,Uid:f55ce75f-fae0-4c9d-a6a9-d30c0ef88a50,Namespace:calico-system,Attempt:0,} returns sandbox id \"d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596\""
Aug 12 23:47:38.570939 containerd[1536]: time="2025-08-12T23:47:38.570915645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\""
Aug 12 23:47:39.339105 containerd[1536]: time="2025-08-12T23:47:39.339009983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pptww,Uid:f0d096b2-e96c-4424-850b-b88da8b049a1,Namespace:calico-system,Attempt:0,}"
Aug 12 23:47:39.437535 systemd-networkd[1439]: calia6135659bb8: Gained IPv6LL
Aug 12 23:47:39.460247 systemd-networkd[1439]: cali21f633f8f0b: Link UP
Aug 12 23:47:39.460922 systemd-networkd[1439]: cali21f633f8f0b: Gained carrier
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.370 [INFO][4414] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--pptww-eth0 csi-node-driver- calico-system f0d096b2-e96c-4424-850b-b88da8b049a1 733 0 2025-08-12 23:47:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-pptww eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali21f633f8f0b [] [] }} ContainerID="25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" Namespace="calico-system" Pod="csi-node-driver-pptww" WorkloadEndpoint="localhost-k8s-csi--node--driver--pptww-"
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.370 [INFO][4414] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" Namespace="calico-system" Pod="csi-node-driver-pptww" WorkloadEndpoint="localhost-k8s-csi--node--driver--pptww-eth0"
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.392 [INFO][4430] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" HandleID="k8s-pod-network.25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" Workload="localhost-k8s-csi--node--driver--pptww-eth0"
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.393 [INFO][4430] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" HandleID="k8s-pod-network.25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" Workload="localhost-k8s-csi--node--driver--pptww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000338050), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-pptww", "timestamp":"2025-08-12 23:47:39.392731872 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.393 [INFO][4430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.393 [INFO][4430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.393 [INFO][4430] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.405 [INFO][4430] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" host="localhost"
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.421 [INFO][4430] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.431 [INFO][4430] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.434 [INFO][4430] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.440 [INFO][4430] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.440 [INFO][4430] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" host="localhost"
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.444 [INFO][4430] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.450 [INFO][4430] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" host="localhost"
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.456 [INFO][4430] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" host="localhost"
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.456 [INFO][4430] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" host="localhost"
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.456 [INFO][4430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Aug 12 23:47:39.471866 containerd[1536]: 2025-08-12 23:47:39.456 [INFO][4430] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" HandleID="k8s-pod-network.25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" Workload="localhost-k8s-csi--node--driver--pptww-eth0"
Aug 12 23:47:39.473383 containerd[1536]: 2025-08-12 23:47:39.458 [INFO][4414] cni-plugin/k8s.go 418: Populated endpoint ContainerID="25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" Namespace="calico-system" Pod="csi-node-driver-pptww" WorkloadEndpoint="localhost-k8s-csi--node--driver--pptww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pptww-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0d096b2-e96c-4424-850b-b88da8b049a1", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-pptww", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali21f633f8f0b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 12 23:47:39.473383 containerd[1536]: 2025-08-12 23:47:39.458 [INFO][4414] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" Namespace="calico-system" Pod="csi-node-driver-pptww" WorkloadEndpoint="localhost-k8s-csi--node--driver--pptww-eth0"
Aug 12 23:47:39.473383 containerd[1536]: 2025-08-12 23:47:39.458 [INFO][4414] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21f633f8f0b ContainerID="25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" Namespace="calico-system" Pod="csi-node-driver-pptww" WorkloadEndpoint="localhost-k8s-csi--node--driver--pptww-eth0"
Aug 12 23:47:39.473383 containerd[1536]: 2025-08-12 23:47:39.460 [INFO][4414] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" Namespace="calico-system" Pod="csi-node-driver-pptww" WorkloadEndpoint="localhost-k8s-csi--node--driver--pptww-eth0"
Aug 12 23:47:39.473383 containerd[1536]: 2025-08-12 23:47:39.460 [INFO][4414] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" Namespace="calico-system" Pod="csi-node-driver-pptww" WorkloadEndpoint="localhost-k8s-csi--node--driver--pptww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--pptww-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0d096b2-e96c-4424-850b-b88da8b049a1", ResourceVersion:"733", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431", Pod:"csi-node-driver-pptww", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali21f633f8f0b", MAC:"16:b0:cb:37:98:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Aug 12 23:47:39.473383 containerd[1536]: 2025-08-12 23:47:39.469 [INFO][4414] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" Namespace="calico-system" Pod="csi-node-driver-pptww" WorkloadEndpoint="localhost-k8s-csi--node--driver--pptww-eth0"
Aug 12 23:47:39.490134 containerd[1536]: time="2025-08-12T23:47:39.489439129Z" level=info msg="connecting to shim 25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431" address="unix:///run/containerd/s/c893d3b969f95b06ac11177c750cd3a960d81a6e13a34f67c27ee047c54d27b4" namespace=k8s.io protocol=ttrpc version=3
Aug 12 23:47:39.510988 systemd[1]: Started sshd@7-10.0.0.67:22-10.0.0.1:46654.service - OpenSSH per-connection server daemon (10.0.0.1:46654).
Aug 12 23:47:39.521993 systemd[1]: Started cri-containerd-25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431.scope - libcontainer container 25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431. Aug 12 23:47:39.534250 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:47:39.549376 containerd[1536]: time="2025-08-12T23:47:39.549254739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pptww,Uid:f0d096b2-e96c-4424-850b-b88da8b049a1,Namespace:calico-system,Attempt:0,} returns sandbox id \"25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431\"" Aug 12 23:47:39.572652 sshd[4482]: Accepted publickey for core from 10.0.0.1 port 46654 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:47:39.574100 sshd-session[4482]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:47:39.578066 systemd-logind[1520]: New session 8 of user core. Aug 12 23:47:39.585228 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 12 23:47:39.829803 sshd[4498]: Connection closed by 10.0.0.1 port 46654 Aug 12 23:47:39.830321 sshd-session[4482]: pam_unix(sshd:session): session closed for user core Aug 12 23:47:39.837936 systemd[1]: sshd@7-10.0.0.67:22-10.0.0.1:46654.service: Deactivated successfully. Aug 12 23:47:39.840489 systemd[1]: session-8.scope: Deactivated successfully. Aug 12 23:47:39.841449 systemd-logind[1520]: Session 8 logged out. Waiting for processes to exit. Aug 12 23:47:39.845624 systemd-logind[1520]: Removed session 8. Aug 12 23:47:40.163716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2424489817.mount: Deactivated successfully. 
Aug 12 23:47:40.338887 containerd[1536]: time="2025-08-12T23:47:40.338847633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dc884bfc-bjlzc,Uid:3b9ed652-c70b-48f0-b332-952d960c0797,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:47:40.461922 systemd-networkd[1439]: calic9a6fc5e139: Gained IPv6LL Aug 12 23:47:40.464281 systemd-networkd[1439]: calif896fd3ee81: Link UP Aug 12 23:47:40.465263 systemd-networkd[1439]: calif896fd3ee81: Gained carrier Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.384 [INFO][4520] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--65dc884bfc--bjlzc-eth0 calico-apiserver-65dc884bfc- calico-apiserver 3b9ed652-c70b-48f0-b332-952d960c0797 837 0 2025-08-12 23:47:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65dc884bfc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-65dc884bfc-bjlzc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calif896fd3ee81 [] [] }} ContainerID="71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-bjlzc" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--bjlzc-" Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.384 [INFO][4520] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-bjlzc" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--bjlzc-eth0" Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.418 [INFO][4534] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" HandleID="k8s-pod-network.71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" Workload="localhost-k8s-calico--apiserver--65dc884bfc--bjlzc-eth0" Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.419 [INFO][4534] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" HandleID="k8s-pod-network.71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" Workload="localhost-k8s-calico--apiserver--65dc884bfc--bjlzc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000338360), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-65dc884bfc-bjlzc", "timestamp":"2025-08-12 23:47:40.418704526 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.419 [INFO][4534] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.419 [INFO][4534] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.419 [INFO][4534] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.429 [INFO][4534] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" host="localhost" Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.434 [INFO][4534] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.438 [INFO][4534] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.442 [INFO][4534] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.444 [INFO][4534] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.444 [INFO][4534] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" host="localhost" Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.446 [INFO][4534] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.450 [INFO][4534] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" host="localhost" Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.455 [INFO][4534] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" host="localhost" Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.455 [INFO][4534] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" host="localhost" Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.455 [INFO][4534] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:47:40.483202 containerd[1536]: 2025-08-12 23:47:40.456 [INFO][4534] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" HandleID="k8s-pod-network.71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" Workload="localhost-k8s-calico--apiserver--65dc884bfc--bjlzc-eth0" Aug 12 23:47:40.483893 containerd[1536]: 2025-08-12 23:47:40.461 [INFO][4520] cni-plugin/k8s.go 418: Populated endpoint ContainerID="71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-bjlzc" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--bjlzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65dc884bfc--bjlzc-eth0", GenerateName:"calico-apiserver-65dc884bfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b9ed652-c70b-48f0-b332-952d960c0797", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65dc884bfc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-65dc884bfc-bjlzc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif896fd3ee81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:47:40.483893 containerd[1536]: 2025-08-12 23:47:40.461 [INFO][4520] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-bjlzc" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--bjlzc-eth0" Aug 12 23:47:40.483893 containerd[1536]: 2025-08-12 23:47:40.461 [INFO][4520] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif896fd3ee81 ContainerID="71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-bjlzc" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--bjlzc-eth0" Aug 12 23:47:40.483893 containerd[1536]: 2025-08-12 23:47:40.467 [INFO][4520] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-bjlzc" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--bjlzc-eth0" Aug 12 23:47:40.483893 containerd[1536]: 2025-08-12 23:47:40.467 [INFO][4520] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-bjlzc" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--bjlzc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65dc884bfc--bjlzc-eth0", GenerateName:"calico-apiserver-65dc884bfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"3b9ed652-c70b-48f0-b332-952d960c0797", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65dc884bfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea", Pod:"calico-apiserver-65dc884bfc-bjlzc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calif896fd3ee81", MAC:"4a:d4:3f:4b:c8:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:47:40.483893 containerd[1536]: 2025-08-12 23:47:40.477 [INFO][4520] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-bjlzc" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--bjlzc-eth0" Aug 12 23:47:40.509460 containerd[1536]: time="2025-08-12T23:47:40.509413460Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Aug 12 23:47:40.509984 containerd[1536]: time="2025-08-12T23:47:40.509941700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:40.512979 containerd[1536]: time="2025-08-12T23:47:40.512941421Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:40.514565 containerd[1536]: time="2025-08-12T23:47:40.514445141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 1.943499336s" Aug 12 23:47:40.514565 containerd[1536]: time="2025-08-12T23:47:40.514482221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Aug 12 23:47:40.515907 containerd[1536]: time="2025-08-12T23:47:40.515859261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:40.519408 containerd[1536]: time="2025-08-12T23:47:40.518663102Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 12 23:47:40.520301 containerd[1536]: time="2025-08-12T23:47:40.520267902Z" level=info msg="CreateContainer within sandbox \"d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 12 23:47:40.520692 containerd[1536]: time="2025-08-12T23:47:40.520662462Z" level=info msg="connecting to shim 71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea" address="unix:///run/containerd/s/07cfa3b77cbfa9307ce997b77b8714f4f145f4cd62cf62da81827c80ca978b38" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:47:40.533727 containerd[1536]: time="2025-08-12T23:47:40.533351424Z" level=info msg="Container 742a468369d48e49ff5b1cc311b3177f2441ca3683a31f5dbd057b711667cc68: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:40.535556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2940919872.mount: Deactivated successfully. Aug 12 23:47:40.542990 containerd[1536]: time="2025-08-12T23:47:40.542935826Z" level=info msg="CreateContainer within sandbox \"d339a141bad9b14ef458f545074af05e6f98accbff454e0968dd1f748f172596\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"742a468369d48e49ff5b1cc311b3177f2441ca3683a31f5dbd057b711667cc68\"" Aug 12 23:47:40.543419 containerd[1536]: time="2025-08-12T23:47:40.543392266Z" level=info msg="StartContainer for \"742a468369d48e49ff5b1cc311b3177f2441ca3683a31f5dbd057b711667cc68\"" Aug 12 23:47:40.544814 containerd[1536]: time="2025-08-12T23:47:40.544781546Z" level=info msg="connecting to shim 742a468369d48e49ff5b1cc311b3177f2441ca3683a31f5dbd057b711667cc68" address="unix:///run/containerd/s/98e140410e79f5e1e591ef5977ad9fbf39ba79d59c7d784e09a76fce112771ec" protocol=ttrpc version=3 Aug 12 23:47:40.548318 systemd[1]: Started cri-containerd-71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea.scope - libcontainer container 
71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea. Aug 12 23:47:40.570250 systemd[1]: Started cri-containerd-742a468369d48e49ff5b1cc311b3177f2441ca3683a31f5dbd057b711667cc68.scope - libcontainer container 742a468369d48e49ff5b1cc311b3177f2441ca3683a31f5dbd057b711667cc68. Aug 12 23:47:40.573723 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:47:40.589799 systemd-networkd[1439]: cali21f633f8f0b: Gained IPv6LL Aug 12 23:47:40.603051 containerd[1536]: time="2025-08-12T23:47:40.603005876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dc884bfc-bjlzc,Uid:3b9ed652-c70b-48f0-b332-952d960c0797,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea\"" Aug 12 23:47:40.622671 containerd[1536]: time="2025-08-12T23:47:40.622638719Z" level=info msg="StartContainer for \"742a468369d48e49ff5b1cc311b3177f2441ca3683a31f5dbd057b711667cc68\" returns successfully" Aug 12 23:47:41.339116 containerd[1536]: time="2025-08-12T23:47:41.339044472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dc884bfc-plshj,Uid:4a1b2d49-cbd2-4a96-bae1-a97cd4c7ed3f,Namespace:calico-apiserver,Attempt:0,}" Aug 12 23:47:41.339258 containerd[1536]: time="2025-08-12T23:47:41.339231912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6857dbbf66-llfb4,Uid:26eaddaa-bc86-4fd0-be2f-2ee51dea7bee,Namespace:calico-system,Attempt:0,}" Aug 12 23:47:41.450159 systemd-networkd[1439]: cali801c1fa4a19: Link UP Aug 12 23:47:41.450575 systemd-networkd[1439]: cali801c1fa4a19: Gained carrier Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.380 [INFO][4633] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--65dc884bfc--plshj-eth0 calico-apiserver-65dc884bfc- calico-apiserver 
4a1b2d49-cbd2-4a96-bae1-a97cd4c7ed3f 826 0 2025-08-12 23:47:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:65dc884bfc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-65dc884bfc-plshj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali801c1fa4a19 [] [] }} ContainerID="9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-plshj" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--plshj-" Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.380 [INFO][4633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-plshj" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--plshj-eth0" Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.404 [INFO][4661] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" HandleID="k8s-pod-network.9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" Workload="localhost-k8s-calico--apiserver--65dc884bfc--plshj-eth0" Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.404 [INFO][4661] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" HandleID="k8s-pod-network.9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" Workload="localhost-k8s-calico--apiserver--65dc884bfc--plshj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001375c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", 
"pod":"calico-apiserver-65dc884bfc-plshj", "timestamp":"2025-08-12 23:47:41.404559682 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.404 [INFO][4661] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.404 [INFO][4661] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.404 [INFO][4661] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.413 [INFO][4661] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" host="localhost" Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.417 [INFO][4661] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.421 [INFO][4661] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.422 [INFO][4661] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.425 [INFO][4661] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.425 [INFO][4661] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" host="localhost" Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.426 [INFO][4661] ipam/ipam.go 
1764: Creating new handle: k8s-pod-network.9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.430 [INFO][4661] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" host="localhost" Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.444 [INFO][4661] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" host="localhost" Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.444 [INFO][4661] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" host="localhost" Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.444 [INFO][4661] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:47:41.468832 containerd[1536]: 2025-08-12 23:47:41.444 [INFO][4661] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" HandleID="k8s-pod-network.9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" Workload="localhost-k8s-calico--apiserver--65dc884bfc--plshj-eth0" Aug 12 23:47:41.469393 containerd[1536]: 2025-08-12 23:47:41.447 [INFO][4633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-plshj" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--plshj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65dc884bfc--plshj-eth0", GenerateName:"calico-apiserver-65dc884bfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a1b2d49-cbd2-4a96-bae1-a97cd4c7ed3f", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65dc884bfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-65dc884bfc-plshj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali801c1fa4a19", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:47:41.469393 containerd[1536]: 2025-08-12 23:47:41.447 [INFO][4633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-plshj" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--plshj-eth0" Aug 12 23:47:41.469393 containerd[1536]: 2025-08-12 23:47:41.447 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali801c1fa4a19 ContainerID="9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-plshj" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--plshj-eth0" Aug 12 23:47:41.469393 containerd[1536]: 2025-08-12 23:47:41.451 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-plshj" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--plshj-eth0" Aug 12 23:47:41.469393 containerd[1536]: 2025-08-12 23:47:41.451 [INFO][4633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-plshj" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--plshj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--65dc884bfc--plshj-eth0", 
GenerateName:"calico-apiserver-65dc884bfc-", Namespace:"calico-apiserver", SelfLink:"", UID:"4a1b2d49-cbd2-4a96-bae1-a97cd4c7ed3f", ResourceVersion:"826", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"65dc884bfc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e", Pod:"calico-apiserver-65dc884bfc-plshj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali801c1fa4a19", MAC:"0e:af:b3:a0:c0:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:47:41.469393 containerd[1536]: 2025-08-12 23:47:41.464 [INFO][4633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" Namespace="calico-apiserver" Pod="calico-apiserver-65dc884bfc-plshj" WorkloadEndpoint="localhost-k8s-calico--apiserver--65dc884bfc--plshj-eth0" Aug 12 23:47:41.485192 containerd[1536]: time="2025-08-12T23:47:41.485018454Z" level=info msg="connecting to shim 9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e" 
address="unix:///run/containerd/s/56d189dcf67b0e9d3908eb135762dc91fe512f2f9cedf753a3d52d9cd9c1351f" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:47:41.513247 systemd[1]: Started cri-containerd-9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e.scope - libcontainer container 9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e. Aug 12 23:47:41.528208 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:47:41.546196 kubelet[2659]: I0812 23:47:41.546124 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-v568g" podStartSLOduration=20.599853247 podStartE2EDuration="22.545452583s" podCreationTimestamp="2025-08-12 23:47:19 +0000 UTC" firstStartedPulling="2025-08-12 23:47:38.570724205 +0000 UTC m=+41.306253729" lastFinishedPulling="2025-08-12 23:47:40.516323541 +0000 UTC m=+43.251853065" observedRunningTime="2025-08-12 23:47:41.542805903 +0000 UTC m=+44.278335427" watchObservedRunningTime="2025-08-12 23:47:41.545452583 +0000 UTC m=+44.280982147" Aug 12 23:47:41.571524 systemd-networkd[1439]: calia442babe0c2: Link UP Aug 12 23:47:41.573846 systemd-networkd[1439]: calia442babe0c2: Gained carrier Aug 12 23:47:41.580255 containerd[1536]: time="2025-08-12T23:47:41.580181108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-65dc884bfc-plshj,Uid:4a1b2d49-cbd2-4a96-bae1-a97cd4c7ed3f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e\"" Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.381 [INFO][4644] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6857dbbf66--llfb4-eth0 calico-kube-controllers-6857dbbf66- calico-system 26eaddaa-bc86-4fd0-be2f-2ee51dea7bee 833 0 2025-08-12 23:47:20 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6857dbbf66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6857dbbf66-llfb4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia442babe0c2 [] [] }} ContainerID="c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" Namespace="calico-system" Pod="calico-kube-controllers-6857dbbf66-llfb4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6857dbbf66--llfb4-" Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.381 [INFO][4644] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" Namespace="calico-system" Pod="calico-kube-controllers-6857dbbf66-llfb4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6857dbbf66--llfb4-eth0" Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.408 [INFO][4667] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" HandleID="k8s-pod-network.c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" Workload="localhost-k8s-calico--kube--controllers--6857dbbf66--llfb4-eth0" Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.408 [INFO][4667] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" HandleID="k8s-pod-network.c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" Workload="localhost-k8s-calico--kube--controllers--6857dbbf66--llfb4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ca230), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", 
"pod":"calico-kube-controllers-6857dbbf66-llfb4", "timestamp":"2025-08-12 23:47:41.408338922 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.408 [INFO][4667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.444 [INFO][4667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.445 [INFO][4667] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.514 [INFO][4667] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" host="localhost" Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.520 [INFO][4667] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.525 [INFO][4667] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.528 [INFO][4667] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.533 [INFO][4667] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.535 [INFO][4667] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" host="localhost" Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.536 [INFO][4667] 
ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45 Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.540 [INFO][4667] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" host="localhost" Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.549 [INFO][4667] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" host="localhost" Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.549 [INFO][4667] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" host="localhost" Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.549 [INFO][4667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 12 23:47:41.597253 containerd[1536]: 2025-08-12 23:47:41.550 [INFO][4667] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" HandleID="k8s-pod-network.c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" Workload="localhost-k8s-calico--kube--controllers--6857dbbf66--llfb4-eth0" Aug 12 23:47:41.599896 containerd[1536]: 2025-08-12 23:47:41.560 [INFO][4644] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" Namespace="calico-system" Pod="calico-kube-controllers-6857dbbf66-llfb4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6857dbbf66--llfb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6857dbbf66--llfb4-eth0", GenerateName:"calico-kube-controllers-6857dbbf66-", Namespace:"calico-system", SelfLink:"", UID:"26eaddaa-bc86-4fd0-be2f-2ee51dea7bee", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6857dbbf66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6857dbbf66-llfb4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia442babe0c2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:47:41.599896 containerd[1536]: 2025-08-12 23:47:41.560 [INFO][4644] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" Namespace="calico-system" Pod="calico-kube-controllers-6857dbbf66-llfb4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6857dbbf66--llfb4-eth0" Aug 12 23:47:41.599896 containerd[1536]: 2025-08-12 23:47:41.560 [INFO][4644] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia442babe0c2 ContainerID="c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" Namespace="calico-system" Pod="calico-kube-controllers-6857dbbf66-llfb4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6857dbbf66--llfb4-eth0" Aug 12 23:47:41.599896 containerd[1536]: 2025-08-12 23:47:41.574 [INFO][4644] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" Namespace="calico-system" Pod="calico-kube-controllers-6857dbbf66-llfb4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6857dbbf66--llfb4-eth0" Aug 12 23:47:41.599896 containerd[1536]: 2025-08-12 23:47:41.577 [INFO][4644] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" Namespace="calico-system" Pod="calico-kube-controllers-6857dbbf66-llfb4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6857dbbf66--llfb4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6857dbbf66--llfb4-eth0", GenerateName:"calico-kube-controllers-6857dbbf66-", Namespace:"calico-system", SelfLink:"", UID:"26eaddaa-bc86-4fd0-be2f-2ee51dea7bee", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6857dbbf66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45", Pod:"calico-kube-controllers-6857dbbf66-llfb4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia442babe0c2", MAC:"3a:94:fe:a6:e0:9b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:47:41.599896 containerd[1536]: 2025-08-12 23:47:41.592 [INFO][4644] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" Namespace="calico-system" Pod="calico-kube-controllers-6857dbbf66-llfb4" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6857dbbf66--llfb4-eth0" Aug 12 23:47:41.682215 containerd[1536]: time="2025-08-12T23:47:41.682147484Z" level=info msg="connecting to shim 
c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45" address="unix:///run/containerd/s/c234c043d31d33e5ff4571ad1f980461e8820c2305f03fdfd1e6372a246bc451" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:47:41.698408 containerd[1536]: time="2025-08-12T23:47:41.697418326Z" level=info msg="TaskExit event in podsandbox handler container_id:\"742a468369d48e49ff5b1cc311b3177f2441ca3683a31f5dbd057b711667cc68\" id:\"6abfc897f3656483bdd0cb1e1de5b9456d5bee6c919d5886270aba72b1b936aa\" pid:4752 exit_status:1 exited_at:{seconds:1755042461 nanos:682374724}" Aug 12 23:47:41.704296 systemd[1]: Started cri-containerd-c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45.scope - libcontainer container c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45. Aug 12 23:47:41.719475 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:47:41.742192 containerd[1536]: time="2025-08-12T23:47:41.742157453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6857dbbf66-llfb4,Uid:26eaddaa-bc86-4fd0-be2f-2ee51dea7bee,Namespace:calico-system,Attempt:0,} returns sandbox id \"c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45\"" Aug 12 23:47:41.782563 containerd[1536]: time="2025-08-12T23:47:41.782504059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:41.782924 containerd[1536]: time="2025-08-12T23:47:41.782892739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Aug 12 23:47:41.783773 containerd[1536]: time="2025-08-12T23:47:41.783746539Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:41.785968 containerd[1536]: 
time="2025-08-12T23:47:41.785930300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:41.786512 containerd[1536]: time="2025-08-12T23:47:41.786477540Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.267107238s" Aug 12 23:47:41.786547 containerd[1536]: time="2025-08-12T23:47:41.786509820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Aug 12 23:47:41.788010 containerd[1536]: time="2025-08-12T23:47:41.787428820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 12 23:47:41.793354 containerd[1536]: time="2025-08-12T23:47:41.793313861Z" level=info msg="CreateContainer within sandbox \"25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 12 23:47:41.799700 containerd[1536]: time="2025-08-12T23:47:41.799670622Z" level=info msg="Container cc33caec2790e0b9fced03f556e2a8b6c89ec9c3d13e725ca7a71b0ad21b3075: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:41.807058 containerd[1536]: time="2025-08-12T23:47:41.807015103Z" level=info msg="CreateContainer within sandbox \"25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cc33caec2790e0b9fced03f556e2a8b6c89ec9c3d13e725ca7a71b0ad21b3075\"" Aug 12 23:47:41.807738 containerd[1536]: time="2025-08-12T23:47:41.807701303Z" level=info msg="StartContainer for 
\"cc33caec2790e0b9fced03f556e2a8b6c89ec9c3d13e725ca7a71b0ad21b3075\"" Aug 12 23:47:41.809719 containerd[1536]: time="2025-08-12T23:47:41.809660703Z" level=info msg="connecting to shim cc33caec2790e0b9fced03f556e2a8b6c89ec9c3d13e725ca7a71b0ad21b3075" address="unix:///run/containerd/s/c893d3b969f95b06ac11177c750cd3a960d81a6e13a34f67c27ee047c54d27b4" protocol=ttrpc version=3 Aug 12 23:47:41.833234 systemd[1]: Started cri-containerd-cc33caec2790e0b9fced03f556e2a8b6c89ec9c3d13e725ca7a71b0ad21b3075.scope - libcontainer container cc33caec2790e0b9fced03f556e2a8b6c89ec9c3d13e725ca7a71b0ad21b3075. Aug 12 23:47:41.873748 containerd[1536]: time="2025-08-12T23:47:41.873430113Z" level=info msg="StartContainer for \"cc33caec2790e0b9fced03f556e2a8b6c89ec9c3d13e725ca7a71b0ad21b3075\" returns successfully" Aug 12 23:47:42.317301 systemd-networkd[1439]: calif896fd3ee81: Gained IPv6LL Aug 12 23:47:42.338627 containerd[1536]: time="2025-08-12T23:47:42.338589501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58jfc,Uid:85871963-4597-475f-bb66-8481545d63bc,Namespace:kube-system,Attempt:0,}" Aug 12 23:47:42.431021 systemd-networkd[1439]: cali3915fdb17be: Link UP Aug 12 23:47:42.431393 systemd-networkd[1439]: cali3915fdb17be: Gained carrier Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.371 [INFO][4850] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--58jfc-eth0 coredns-668d6bf9bc- kube-system 85871963-4597-475f-bb66-8481545d63bc 835 0 2025-08-12 23:47:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-58jfc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3915fdb17be [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-58jfc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58jfc-" Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.371 [INFO][4850] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-58jfc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58jfc-eth0" Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.394 [INFO][4864] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" HandleID="k8s-pod-network.8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" Workload="localhost-k8s-coredns--668d6bf9bc--58jfc-eth0" Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.394 [INFO][4864] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" HandleID="k8s-pod-network.8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" Workload="localhost-k8s-coredns--668d6bf9bc--58jfc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137740), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-58jfc", "timestamp":"2025-08-12 23:47:42.394692149 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.394 [INFO][4864] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.394 [INFO][4864] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.394 [INFO][4864] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.403 [INFO][4864] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" host="localhost" Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.407 [INFO][4864] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.411 [INFO][4864] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.412 [INFO][4864] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.415 [INFO][4864] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.415 [INFO][4864] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" host="localhost" Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.417 [INFO][4864] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.420 [INFO][4864] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" host="localhost" Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.426 [INFO][4864] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" host="localhost" Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.426 [INFO][4864] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" host="localhost" Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.426 [INFO][4864] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 12 23:47:42.444833 containerd[1536]: 2025-08-12 23:47:42.426 [INFO][4864] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" HandleID="k8s-pod-network.8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" Workload="localhost-k8s-coredns--668d6bf9bc--58jfc-eth0" Aug 12 23:47:42.445462 containerd[1536]: 2025-08-12 23:47:42.428 [INFO][4850] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-58jfc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58jfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--58jfc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"85871963-4597-475f-bb66-8481545d63bc", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-58jfc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3915fdb17be", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:47:42.445462 containerd[1536]: 2025-08-12 23:47:42.429 [INFO][4850] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-58jfc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58jfc-eth0" Aug 12 23:47:42.445462 containerd[1536]: 2025-08-12 23:47:42.429 [INFO][4850] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3915fdb17be ContainerID="8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-58jfc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58jfc-eth0" Aug 12 23:47:42.445462 containerd[1536]: 2025-08-12 23:47:42.431 [INFO][4850] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-58jfc" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58jfc-eth0" Aug 12 23:47:42.445462 containerd[1536]: 2025-08-12 23:47:42.432 [INFO][4850] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-58jfc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58jfc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--58jfc-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"85871963-4597-475f-bb66-8481545d63bc", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.August, 12, 23, 47, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a", Pod:"coredns-668d6bf9bc-58jfc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3915fdb17be", MAC:"6a:3e:72:37:c6:c7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 12 23:47:42.445462 containerd[1536]: 2025-08-12 23:47:42.441 [INFO][4850] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" Namespace="kube-system" Pod="coredns-668d6bf9bc-58jfc" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--58jfc-eth0" Aug 12 23:47:42.470609 containerd[1536]: time="2025-08-12T23:47:42.470559560Z" level=info msg="connecting to shim 8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a" address="unix:///run/containerd/s/5586c06f262f8da3bdb858d72276299c5af205d070decd7d8d024987e3294688" namespace=k8s.io protocol=ttrpc version=3 Aug 12 23:47:42.495234 systemd[1]: Started cri-containerd-8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a.scope - libcontainer container 8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a. 
Aug 12 23:47:42.506056 systemd-resolved[1356]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 12 23:47:42.527202 containerd[1536]: time="2025-08-12T23:47:42.527149008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-58jfc,Uid:85871963-4597-475f-bb66-8481545d63bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a\"" Aug 12 23:47:42.530223 containerd[1536]: time="2025-08-12T23:47:42.530194288Z" level=info msg="CreateContainer within sandbox \"8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 12 23:47:42.540751 containerd[1536]: time="2025-08-12T23:47:42.540718170Z" level=info msg="Container 8e04170a79af47f3c05c233966472a7b814d59f3227e54f3d2bf29bb06fa4011: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:42.548921 containerd[1536]: time="2025-08-12T23:47:42.548813811Z" level=info msg="CreateContainer within sandbox \"8235568da60ec5e2dbe380e17dc2cf7f9e31c019c4dc9a7469e2112014746a2a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8e04170a79af47f3c05c233966472a7b814d59f3227e54f3d2bf29bb06fa4011\"" Aug 12 23:47:42.549502 containerd[1536]: time="2025-08-12T23:47:42.549306131Z" level=info msg="StartContainer for \"8e04170a79af47f3c05c233966472a7b814d59f3227e54f3d2bf29bb06fa4011\"" Aug 12 23:47:42.551759 containerd[1536]: time="2025-08-12T23:47:42.551730171Z" level=info msg="connecting to shim 8e04170a79af47f3c05c233966472a7b814d59f3227e54f3d2bf29bb06fa4011" address="unix:///run/containerd/s/5586c06f262f8da3bdb858d72276299c5af205d070decd7d8d024987e3294688" protocol=ttrpc version=3 Aug 12 23:47:42.569687 systemd[1]: Started cri-containerd-8e04170a79af47f3c05c233966472a7b814d59f3227e54f3d2bf29bb06fa4011.scope - libcontainer container 8e04170a79af47f3c05c233966472a7b814d59f3227e54f3d2bf29bb06fa4011. 
Aug 12 23:47:42.603039 containerd[1536]: time="2025-08-12T23:47:42.603006539Z" level=info msg="StartContainer for \"8e04170a79af47f3c05c233966472a7b814d59f3227e54f3d2bf29bb06fa4011\" returns successfully" Aug 12 23:47:42.627420 containerd[1536]: time="2025-08-12T23:47:42.627378822Z" level=info msg="TaskExit event in podsandbox handler container_id:\"742a468369d48e49ff5b1cc311b3177f2441ca3683a31f5dbd057b711667cc68\" id:\"d9b77f53fc21886a6eca1bbcc6372517fcca4b43b5bd987c0591077c34d6b6c2\" pid:4947 exit_status:1 exited_at:{seconds:1755042462 nanos:627004142}" Aug 12 23:47:42.701404 systemd-networkd[1439]: cali801c1fa4a19: Gained IPv6LL Aug 12 23:47:43.342232 systemd-networkd[1439]: calia442babe0c2: Gained IPv6LL Aug 12 23:47:43.574106 kubelet[2659]: I0812 23:47:43.573982 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-58jfc" podStartSLOduration=39.573965192 podStartE2EDuration="39.573965192s" podCreationTimestamp="2025-08-12 23:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-12 23:47:43.572969112 +0000 UTC m=+46.308498596" watchObservedRunningTime="2025-08-12 23:47:43.573965192 +0000 UTC m=+46.309494716" Aug 12 23:47:43.612162 containerd[1536]: time="2025-08-12T23:47:43.611849037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:43.612959 containerd[1536]: time="2025-08-12T23:47:43.612724197Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Aug 12 23:47:43.616654 containerd[1536]: time="2025-08-12T23:47:43.615103878Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:43.618378 containerd[1536]: 
time="2025-08-12T23:47:43.618339118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:43.619699 containerd[1536]: time="2025-08-12T23:47:43.618991318Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.831527618s" Aug 12 23:47:43.619699 containerd[1536]: time="2025-08-12T23:47:43.619026718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 12 23:47:43.621416 containerd[1536]: time="2025-08-12T23:47:43.621376598Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 12 23:47:43.623012 containerd[1536]: time="2025-08-12T23:47:43.622972119Z" level=info msg="CreateContainer within sandbox \"71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 12 23:47:43.633863 containerd[1536]: time="2025-08-12T23:47:43.629559880Z" level=info msg="Container 7b205c688e3d34f1a233dae912d6f3c528ae6370f228b62718624ee0ff5adc67: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:43.635906 containerd[1536]: time="2025-08-12T23:47:43.635864640Z" level=info msg="CreateContainer within sandbox \"71bf9b95a64cd2fdd4e795e7f0be635918e52f90b1f28989298555c76ee0faea\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7b205c688e3d34f1a233dae912d6f3c528ae6370f228b62718624ee0ff5adc67\"" Aug 12 23:47:43.636521 containerd[1536]: 
time="2025-08-12T23:47:43.636485600Z" level=info msg="StartContainer for \"7b205c688e3d34f1a233dae912d6f3c528ae6370f228b62718624ee0ff5adc67\"" Aug 12 23:47:43.637824 containerd[1536]: time="2025-08-12T23:47:43.637790961Z" level=info msg="connecting to shim 7b205c688e3d34f1a233dae912d6f3c528ae6370f228b62718624ee0ff5adc67" address="unix:///run/containerd/s/07cfa3b77cbfa9307ce997b77b8714f4f145f4cd62cf62da81827c80ca978b38" protocol=ttrpc version=3 Aug 12 23:47:43.649433 containerd[1536]: time="2025-08-12T23:47:43.649401282Z" level=info msg="TaskExit event in podsandbox handler container_id:\"742a468369d48e49ff5b1cc311b3177f2441ca3683a31f5dbd057b711667cc68\" id:\"afde89899e71a51c4c427e0d443078fdd58fff31c0dfdbbb36fc0bc34693c4bb\" pid:5007 exit_status:1 exited_at:{seconds:1755042463 nanos:649144322}" Aug 12 23:47:43.673468 systemd[1]: Started cri-containerd-7b205c688e3d34f1a233dae912d6f3c528ae6370f228b62718624ee0ff5adc67.scope - libcontainer container 7b205c688e3d34f1a233dae912d6f3c528ae6370f228b62718624ee0ff5adc67. 
Aug 12 23:47:43.704510 containerd[1536]: time="2025-08-12T23:47:43.704468410Z" level=info msg="StartContainer for \"7b205c688e3d34f1a233dae912d6f3c528ae6370f228b62718624ee0ff5adc67\" returns successfully" Aug 12 23:47:43.911693 containerd[1536]: time="2025-08-12T23:47:43.911640037Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:43.911807 containerd[1536]: time="2025-08-12T23:47:43.911700557Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 12 23:47:43.913293 containerd[1536]: time="2025-08-12T23:47:43.912955597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 291.541439ms" Aug 12 23:47:43.913293 containerd[1536]: time="2025-08-12T23:47:43.912987397Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Aug 12 23:47:43.914248 containerd[1536]: time="2025-08-12T23:47:43.914214518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 12 23:47:43.915848 containerd[1536]: time="2025-08-12T23:47:43.915803758Z" level=info msg="CreateContainer within sandbox \"9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 12 23:47:43.929409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2606685423.mount: Deactivated successfully. 
Aug 12 23:47:43.930895 containerd[1536]: time="2025-08-12T23:47:43.930191640Z" level=info msg="Container 6fad4fbc1c345207c5bcf143670680fa3845071d544d382d18089e1f8c5bb375: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:43.936295 containerd[1536]: time="2025-08-12T23:47:43.936255681Z" level=info msg="CreateContainer within sandbox \"9bc5380d585f95670217fb9f02d8d8bee04706909011323e43fd0ecf74099a9e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6fad4fbc1c345207c5bcf143670680fa3845071d544d382d18089e1f8c5bb375\"" Aug 12 23:47:43.937253 containerd[1536]: time="2025-08-12T23:47:43.937223881Z" level=info msg="StartContainer for \"6fad4fbc1c345207c5bcf143670680fa3845071d544d382d18089e1f8c5bb375\"" Aug 12 23:47:43.939683 containerd[1536]: time="2025-08-12T23:47:43.939657041Z" level=info msg="connecting to shim 6fad4fbc1c345207c5bcf143670680fa3845071d544d382d18089e1f8c5bb375" address="unix:///run/containerd/s/56d189dcf67b0e9d3908eb135762dc91fe512f2f9cedf753a3d52d9cd9c1351f" protocol=ttrpc version=3 Aug 12 23:47:43.960309 systemd[1]: Started cri-containerd-6fad4fbc1c345207c5bcf143670680fa3845071d544d382d18089e1f8c5bb375.scope - libcontainer container 6fad4fbc1c345207c5bcf143670680fa3845071d544d382d18089e1f8c5bb375. 
Aug 12 23:47:44.007233 containerd[1536]: time="2025-08-12T23:47:44.007196970Z" level=info msg="StartContainer for \"6fad4fbc1c345207c5bcf143670680fa3845071d544d382d18089e1f8c5bb375\" returns successfully" Aug 12 23:47:44.109375 systemd-networkd[1439]: cali3915fdb17be: Gained IPv6LL Aug 12 23:47:44.581039 kubelet[2659]: I0812 23:47:44.580968 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65dc884bfc-bjlzc" podStartSLOduration=26.56561244 podStartE2EDuration="29.580949282s" podCreationTimestamp="2025-08-12 23:47:15 +0000 UTC" firstStartedPulling="2025-08-12 23:47:40.605518156 +0000 UTC m=+43.341047680" lastFinishedPulling="2025-08-12 23:47:43.620854958 +0000 UTC m=+46.356384522" observedRunningTime="2025-08-12 23:47:44.5669822 +0000 UTC m=+47.302511764" watchObservedRunningTime="2025-08-12 23:47:44.580949282 +0000 UTC m=+47.316478806" Aug 12 23:47:44.582376 kubelet[2659]: I0812 23:47:44.582311 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-65dc884bfc-plshj" podStartSLOduration=27.251827753 podStartE2EDuration="29.582300082s" podCreationTimestamp="2025-08-12 23:47:15 +0000 UTC" firstStartedPulling="2025-08-12 23:47:41.583571189 +0000 UTC m=+44.319100713" lastFinishedPulling="2025-08-12 23:47:43.914043518 +0000 UTC m=+46.649573042" observedRunningTime="2025-08-12 23:47:44.580055562 +0000 UTC m=+47.315585086" watchObservedRunningTime="2025-08-12 23:47:44.582300082 +0000 UTC m=+47.317829606" Aug 12 23:47:44.848810 systemd[1]: Started sshd@8-10.0.0.67:22-10.0.0.1:44468.service - OpenSSH per-connection server daemon (10.0.0.1:44468). 
Aug 12 23:47:44.927866 sshd[5104]: Accepted publickey for core from 10.0.0.1 port 44468 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:47:44.929614 sshd-session[5104]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:47:44.935896 systemd-logind[1520]: New session 9 of user core. Aug 12 23:47:44.940732 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 12 23:47:45.158163 sshd[5106]: Connection closed by 10.0.0.1 port 44468 Aug 12 23:47:45.159587 sshd-session[5104]: pam_unix(sshd:session): session closed for user core Aug 12 23:47:45.164519 systemd[1]: sshd@8-10.0.0.67:22-10.0.0.1:44468.service: Deactivated successfully. Aug 12 23:47:45.167960 systemd[1]: session-9.scope: Deactivated successfully. Aug 12 23:47:45.172275 systemd-logind[1520]: Session 9 logged out. Waiting for processes to exit. Aug 12 23:47:45.173928 systemd-logind[1520]: Removed session 9. Aug 12 23:47:45.557650 kubelet[2659]: I0812 23:47:45.557491 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:47:46.499302 kubelet[2659]: I0812 23:47:46.499257 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:47:46.574168 containerd[1536]: time="2025-08-12T23:47:46.574117756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:46.575054 containerd[1536]: time="2025-08-12T23:47:46.574846236Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Aug 12 23:47:46.575811 containerd[1536]: time="2025-08-12T23:47:46.575761116Z" level=info msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:46.577855 containerd[1536]: time="2025-08-12T23:47:46.577808556Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:46.578609 containerd[1536]: time="2025-08-12T23:47:46.578568916Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.664316998s" Aug 12 23:47:46.578679 containerd[1536]: time="2025-08-12T23:47:46.578609196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Aug 12 23:47:46.581256 containerd[1536]: time="2025-08-12T23:47:46.581231236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 12 23:47:46.588339 containerd[1536]: time="2025-08-12T23:47:46.588268877Z" level=info msg="CreateContainer within sandbox \"c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 12 23:47:46.636927 containerd[1536]: time="2025-08-12T23:47:46.636874083Z" level=info msg="Container de1f6ba032eeedafba4fe7866ecd9d2b1340b79c07f54bb6d272cfe58ad24319: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:46.645841 containerd[1536]: time="2025-08-12T23:47:46.645702004Z" level=info msg="CreateContainer within sandbox \"c130e2183bdd89c0d79facf04547fcbd3e9e7f90e68b90c22ed6a03da3a7ce45\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"de1f6ba032eeedafba4fe7866ecd9d2b1340b79c07f54bb6d272cfe58ad24319\"" Aug 12 23:47:46.646745 containerd[1536]: 
time="2025-08-12T23:47:46.646710484Z" level=info msg="StartContainer for \"de1f6ba032eeedafba4fe7866ecd9d2b1340b79c07f54bb6d272cfe58ad24319\"" Aug 12 23:47:46.648584 containerd[1536]: time="2025-08-12T23:47:46.648557524Z" level=info msg="connecting to shim de1f6ba032eeedafba4fe7866ecd9d2b1340b79c07f54bb6d272cfe58ad24319" address="unix:///run/containerd/s/c234c043d31d33e5ff4571ad1f980461e8820c2305f03fdfd1e6372a246bc451" protocol=ttrpc version=3 Aug 12 23:47:46.654218 containerd[1536]: time="2025-08-12T23:47:46.653859324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a3ef371fc2716a3d82f04e28b6e1ba27802157b3762a34c5de0613bed69a435\" id:\"101e40185cea3cfbe74a50b8210f72b4cc339b0abe6a41e85abfe05be9916f7e\" pid:5138 exited_at:{seconds:1755042466 nanos:653440284}" Aug 12 23:47:46.679234 systemd[1]: Started cri-containerd-de1f6ba032eeedafba4fe7866ecd9d2b1340b79c07f54bb6d272cfe58ad24319.scope - libcontainer container de1f6ba032eeedafba4fe7866ecd9d2b1340b79c07f54bb6d272cfe58ad24319. 
Aug 12 23:47:46.730892 containerd[1536]: time="2025-08-12T23:47:46.730854213Z" level=info msg="StartContainer for \"de1f6ba032eeedafba4fe7866ecd9d2b1340b79c07f54bb6d272cfe58ad24319\" returns successfully" Aug 12 23:47:46.765188 containerd[1536]: time="2025-08-12T23:47:46.765029137Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a3ef371fc2716a3d82f04e28b6e1ba27802157b3762a34c5de0613bed69a435\" id:\"237d685a2b8e2ded901f89d673c0099e2eec4f791b9d8550751c48afeda536d9\" pid:5178 exited_at:{seconds:1755042466 nanos:764752937}" Aug 12 23:47:47.620130 containerd[1536]: time="2025-08-12T23:47:47.619620507Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de1f6ba032eeedafba4fe7866ecd9d2b1340b79c07f54bb6d272cfe58ad24319\" id:\"1eae17d090b751b1acb2ebf0955bfa76ca75f3a51c3d2ad88dc6c57cf1c17425\" pid:5238 exited_at:{seconds:1755042467 nanos:619369267}" Aug 12 23:47:47.634101 kubelet[2659]: I0812 23:47:47.633966 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6857dbbf66-llfb4" podStartSLOduration=22.797430445 podStartE2EDuration="27.633947028s" podCreationTimestamp="2025-08-12 23:47:20 +0000 UTC" firstStartedPulling="2025-08-12 23:47:41.744126653 +0000 UTC m=+44.479656177" lastFinishedPulling="2025-08-12 23:47:46.580643236 +0000 UTC m=+49.316172760" observedRunningTime="2025-08-12 23:47:47.584222583 +0000 UTC m=+50.319752147" watchObservedRunningTime="2025-08-12 23:47:47.633947028 +0000 UTC m=+50.369476552" Aug 12 23:47:47.937152 containerd[1536]: time="2025-08-12T23:47:47.937092860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:47.937983 containerd[1536]: time="2025-08-12T23:47:47.937908060Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=13754366" Aug 12 23:47:47.938907 
containerd[1536]: time="2025-08-12T23:47:47.938873500Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:47.940917 containerd[1536]: time="2025-08-12T23:47:47.940880340Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 12 23:47:47.941770 containerd[1536]: time="2025-08-12T23:47:47.941728860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.360352703s" Aug 12 23:47:47.941799 containerd[1536]: time="2025-08-12T23:47:47.941767700Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Aug 12 23:47:47.943849 containerd[1536]: time="2025-08-12T23:47:47.943819380Z" level=info msg="CreateContainer within sandbox \"25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 12 23:47:47.951319 containerd[1536]: time="2025-08-12T23:47:47.951155581Z" level=info msg="Container e8ec1fde4b90d74b857db20dd00022e15d03bc52b633dc44dc7824e29c61f333: CDI devices from CRI Config.CDIDevices: []" Aug 12 23:47:47.960870 containerd[1536]: time="2025-08-12T23:47:47.960821262Z" level=info msg="CreateContainer within sandbox \"25cee6737278dddb8fb118f8681c648d44c1fe3a064bdf4c6a453ec362e40431\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"e8ec1fde4b90d74b857db20dd00022e15d03bc52b633dc44dc7824e29c61f333\"" Aug 12 23:47:47.961533 containerd[1536]: time="2025-08-12T23:47:47.961411342Z" level=info msg="StartContainer for \"e8ec1fde4b90d74b857db20dd00022e15d03bc52b633dc44dc7824e29c61f333\"" Aug 12 23:47:47.962974 containerd[1536]: time="2025-08-12T23:47:47.962928542Z" level=info msg="connecting to shim e8ec1fde4b90d74b857db20dd00022e15d03bc52b633dc44dc7824e29c61f333" address="unix:///run/containerd/s/c893d3b969f95b06ac11177c750cd3a960d81a6e13a34f67c27ee047c54d27b4" protocol=ttrpc version=3 Aug 12 23:47:47.987269 systemd[1]: Started cri-containerd-e8ec1fde4b90d74b857db20dd00022e15d03bc52b633dc44dc7824e29c61f333.scope - libcontainer container e8ec1fde4b90d74b857db20dd00022e15d03bc52b633dc44dc7824e29c61f333. Aug 12 23:47:48.019061 containerd[1536]: time="2025-08-12T23:47:48.019023028Z" level=info msg="StartContainer for \"e8ec1fde4b90d74b857db20dd00022e15d03bc52b633dc44dc7824e29c61f333\" returns successfully" Aug 12 23:47:48.420250 kubelet[2659]: I0812 23:47:48.420205 2659 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 12 23:47:48.426713 kubelet[2659]: I0812 23:47:48.426679 2659 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 12 23:47:48.585343 kubelet[2659]: I0812 23:47:48.585265 2659 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pptww" podStartSLOduration=20.193421603 podStartE2EDuration="28.585246443s" podCreationTimestamp="2025-08-12 23:47:20 +0000 UTC" firstStartedPulling="2025-08-12 23:47:39.55083526 +0000 UTC m=+42.286364784" lastFinishedPulling="2025-08-12 23:47:47.9426601 +0000 UTC m=+50.678189624" observedRunningTime="2025-08-12 
23:47:48.583809363 +0000 UTC m=+51.319338887" watchObservedRunningTime="2025-08-12 23:47:48.585246443 +0000 UTC m=+51.320775967" Aug 12 23:47:49.634285 containerd[1536]: time="2025-08-12T23:47:49.634105221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"742a468369d48e49ff5b1cc311b3177f2441ca3683a31f5dbd057b711667cc68\" id:\"9386856c4f31df72eff6c8d9cfbfa0b45a49c307e816f65fc118fde842521152\" pid:5298 exited_at:{seconds:1755042469 nanos:633740101}" Aug 12 23:47:50.176822 systemd[1]: Started sshd@9-10.0.0.67:22-10.0.0.1:44470.service - OpenSSH per-connection server daemon (10.0.0.1:44470). Aug 12 23:47:50.241031 sshd[5312]: Accepted publickey for core from 10.0.0.1 port 44470 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:47:50.242418 sshd-session[5312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:47:50.246286 systemd-logind[1520]: New session 10 of user core. Aug 12 23:47:50.259243 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 12 23:47:50.488573 sshd[5314]: Connection closed by 10.0.0.1 port 44470 Aug 12 23:47:50.487560 sshd-session[5312]: pam_unix(sshd:session): session closed for user core Aug 12 23:47:50.496824 systemd[1]: sshd@9-10.0.0.67:22-10.0.0.1:44470.service: Deactivated successfully. Aug 12 23:47:50.500600 systemd[1]: session-10.scope: Deactivated successfully. Aug 12 23:47:50.502602 systemd-logind[1520]: Session 10 logged out. Waiting for processes to exit. Aug 12 23:47:50.505645 systemd[1]: Started sshd@10-10.0.0.67:22-10.0.0.1:44472.service - OpenSSH per-connection server daemon (10.0.0.1:44472). Aug 12 23:47:50.506704 systemd-logind[1520]: Removed session 10. 
Aug 12 23:47:50.553930 sshd[5328]: Accepted publickey for core from 10.0.0.1 port 44472 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:47:50.555298 sshd-session[5328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:47:50.559678 systemd-logind[1520]: New session 11 of user core. Aug 12 23:47:50.569288 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 12 23:47:50.785764 sshd[5330]: Connection closed by 10.0.0.1 port 44472 Aug 12 23:47:50.787320 sshd-session[5328]: pam_unix(sshd:session): session closed for user core Aug 12 23:47:50.800069 systemd[1]: sshd@10-10.0.0.67:22-10.0.0.1:44472.service: Deactivated successfully. Aug 12 23:47:50.804323 systemd[1]: session-11.scope: Deactivated successfully. Aug 12 23:47:50.805125 systemd-logind[1520]: Session 11 logged out. Waiting for processes to exit. Aug 12 23:47:50.813659 systemd[1]: Started sshd@11-10.0.0.67:22-10.0.0.1:44488.service - OpenSSH per-connection server daemon (10.0.0.1:44488). Aug 12 23:47:50.814721 systemd-logind[1520]: Removed session 11. Aug 12 23:47:50.865326 sshd[5343]: Accepted publickey for core from 10.0.0.1 port 44488 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:47:50.866681 sshd-session[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:47:50.871756 systemd-logind[1520]: New session 12 of user core. Aug 12 23:47:50.880260 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 12 23:47:51.023580 sshd[5345]: Connection closed by 10.0.0.1 port 44488 Aug 12 23:47:51.023907 sshd-session[5343]: pam_unix(sshd:session): session closed for user core Aug 12 23:47:51.026942 systemd[1]: sshd@11-10.0.0.67:22-10.0.0.1:44488.service: Deactivated successfully. Aug 12 23:47:51.028675 systemd[1]: session-12.scope: Deactivated successfully. Aug 12 23:47:51.030164 systemd-logind[1520]: Session 12 logged out. Waiting for processes to exit. 
Aug 12 23:47:51.032067 systemd-logind[1520]: Removed session 12. Aug 12 23:47:56.042694 systemd[1]: Started sshd@12-10.0.0.67:22-10.0.0.1:51880.service - OpenSSH per-connection server daemon (10.0.0.1:51880). Aug 12 23:47:56.081618 sshd[5371]: Accepted publickey for core from 10.0.0.1 port 51880 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:47:56.082824 sshd-session[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:47:56.087146 systemd-logind[1520]: New session 13 of user core. Aug 12 23:47:56.098243 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 12 23:47:56.247388 sshd[5373]: Connection closed by 10.0.0.1 port 51880 Aug 12 23:47:56.247727 sshd-session[5371]: pam_unix(sshd:session): session closed for user core Aug 12 23:47:56.252262 systemd[1]: sshd@12-10.0.0.67:22-10.0.0.1:51880.service: Deactivated successfully. Aug 12 23:47:56.254021 systemd[1]: session-13.scope: Deactivated successfully. Aug 12 23:47:56.254742 systemd-logind[1520]: Session 13 logged out. Waiting for processes to exit. Aug 12 23:47:56.255812 systemd-logind[1520]: Removed session 13. Aug 12 23:47:57.148982 kubelet[2659]: I0812 23:47:57.148577 2659 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 12 23:48:01.263892 systemd[1]: Started sshd@13-10.0.0.67:22-10.0.0.1:51890.service - OpenSSH per-connection server daemon (10.0.0.1:51890). Aug 12 23:48:01.311921 sshd[5391]: Accepted publickey for core from 10.0.0.1 port 51890 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:48:01.313128 sshd-session[5391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:48:01.316969 systemd-logind[1520]: New session 14 of user core. Aug 12 23:48:01.326237 systemd[1]: Started session-14.scope - Session 14 of User core. 
Aug 12 23:48:01.463776 sshd[5393]: Connection closed by 10.0.0.1 port 51890 Aug 12 23:48:01.464711 sshd-session[5391]: pam_unix(sshd:session): session closed for user core Aug 12 23:48:01.470614 systemd[1]: sshd@13-10.0.0.67:22-10.0.0.1:51890.service: Deactivated successfully. Aug 12 23:48:01.472229 systemd[1]: session-14.scope: Deactivated successfully. Aug 12 23:48:01.474328 systemd-logind[1520]: Session 14 logged out. Waiting for processes to exit. Aug 12 23:48:01.475355 systemd-logind[1520]: Removed session 14. Aug 12 23:48:06.479408 systemd[1]: Started sshd@14-10.0.0.67:22-10.0.0.1:50866.service - OpenSSH per-connection server daemon (10.0.0.1:50866). Aug 12 23:48:06.527959 sshd[5411]: Accepted publickey for core from 10.0.0.1 port 50866 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:48:06.529188 sshd-session[5411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:48:06.532777 systemd-logind[1520]: New session 15 of user core. Aug 12 23:48:06.544217 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 12 23:48:06.693545 sshd[5413]: Connection closed by 10.0.0.1 port 50866 Aug 12 23:48:06.694052 sshd-session[5411]: pam_unix(sshd:session): session closed for user core Aug 12 23:48:06.697900 systemd-logind[1520]: Session 15 logged out. Waiting for processes to exit. Aug 12 23:48:06.698133 systemd[1]: sshd@14-10.0.0.67:22-10.0.0.1:50866.service: Deactivated successfully. Aug 12 23:48:06.700423 systemd[1]: session-15.scope: Deactivated successfully. Aug 12 23:48:06.702135 systemd-logind[1520]: Removed session 15. Aug 12 23:48:11.709721 systemd[1]: Started sshd@15-10.0.0.67:22-10.0.0.1:50876.service - OpenSSH per-connection server daemon (10.0.0.1:50876). 
Aug 12 23:48:11.780896 sshd[5430]: Accepted publickey for core from 10.0.0.1 port 50876 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:48:11.782326 sshd-session[5430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:48:11.786368 systemd-logind[1520]: New session 16 of user core. Aug 12 23:48:11.796273 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 12 23:48:11.953070 sshd[5432]: Connection closed by 10.0.0.1 port 50876 Aug 12 23:48:11.953689 sshd-session[5430]: pam_unix(sshd:session): session closed for user core Aug 12 23:48:11.963311 systemd[1]: sshd@15-10.0.0.67:22-10.0.0.1:50876.service: Deactivated successfully. Aug 12 23:48:11.965275 systemd[1]: session-16.scope: Deactivated successfully. Aug 12 23:48:11.966707 systemd-logind[1520]: Session 16 logged out. Waiting for processes to exit. Aug 12 23:48:11.968807 systemd[1]: Started sshd@16-10.0.0.67:22-10.0.0.1:50890.service - OpenSSH per-connection server daemon (10.0.0.1:50890). Aug 12 23:48:11.969955 systemd-logind[1520]: Removed session 16. Aug 12 23:48:12.020544 sshd[5447]: Accepted publickey for core from 10.0.0.1 port 50890 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:48:12.021750 sshd-session[5447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:48:12.025913 systemd-logind[1520]: New session 17 of user core. Aug 12 23:48:12.036245 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 12 23:48:12.285022 sshd[5449]: Connection closed by 10.0.0.1 port 50890 Aug 12 23:48:12.285337 sshd-session[5447]: pam_unix(sshd:session): session closed for user core Aug 12 23:48:12.299413 systemd[1]: sshd@16-10.0.0.67:22-10.0.0.1:50890.service: Deactivated successfully. Aug 12 23:48:12.301478 systemd[1]: session-17.scope: Deactivated successfully. Aug 12 23:48:12.302258 systemd-logind[1520]: Session 17 logged out. Waiting for processes to exit. 
Aug 12 23:48:12.304913 systemd[1]: Started sshd@17-10.0.0.67:22-10.0.0.1:50902.service - OpenSSH per-connection server daemon (10.0.0.1:50902). Aug 12 23:48:12.305718 systemd-logind[1520]: Removed session 17. Aug 12 23:48:12.351540 sshd[5461]: Accepted publickey for core from 10.0.0.1 port 50902 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:48:12.352657 sshd-session[5461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:48:12.357024 systemd-logind[1520]: New session 18 of user core. Aug 12 23:48:12.370275 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 12 23:48:12.991636 sshd[5463]: Connection closed by 10.0.0.1 port 50902 Aug 12 23:48:12.991962 sshd-session[5461]: pam_unix(sshd:session): session closed for user core Aug 12 23:48:13.004675 systemd[1]: sshd@17-10.0.0.67:22-10.0.0.1:50902.service: Deactivated successfully. Aug 12 23:48:13.007181 systemd[1]: session-18.scope: Deactivated successfully. Aug 12 23:48:13.009458 systemd-logind[1520]: Session 18 logged out. Waiting for processes to exit. Aug 12 23:48:13.020454 systemd[1]: Started sshd@18-10.0.0.67:22-10.0.0.1:39136.service - OpenSSH per-connection server daemon (10.0.0.1:39136). Aug 12 23:48:13.022406 systemd-logind[1520]: Removed session 18. Aug 12 23:48:13.065439 sshd[5484]: Accepted publickey for core from 10.0.0.1 port 39136 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:48:13.066653 sshd-session[5484]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:48:13.071057 systemd-logind[1520]: New session 19 of user core. Aug 12 23:48:13.084256 systemd[1]: Started session-19.scope - Session 19 of User core. 
Aug 12 23:48:13.386520 sshd[5486]: Connection closed by 10.0.0.1 port 39136 Aug 12 23:48:13.387065 sshd-session[5484]: pam_unix(sshd:session): session closed for user core Aug 12 23:48:13.399482 systemd[1]: sshd@18-10.0.0.67:22-10.0.0.1:39136.service: Deactivated successfully. Aug 12 23:48:13.404912 systemd[1]: session-19.scope: Deactivated successfully. Aug 12 23:48:13.405684 systemd-logind[1520]: Session 19 logged out. Waiting for processes to exit. Aug 12 23:48:13.409534 systemd[1]: Started sshd@19-10.0.0.67:22-10.0.0.1:39146.service - OpenSSH per-connection server daemon (10.0.0.1:39146). Aug 12 23:48:13.410262 systemd-logind[1520]: Removed session 19. Aug 12 23:48:13.457678 sshd[5503]: Accepted publickey for core from 10.0.0.1 port 39146 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:48:13.459136 sshd-session[5503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:48:13.466117 systemd-logind[1520]: New session 20 of user core. Aug 12 23:48:13.484299 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 12 23:48:13.609269 sshd[5505]: Connection closed by 10.0.0.1 port 39146 Aug 12 23:48:13.609603 sshd-session[5503]: pam_unix(sshd:session): session closed for user core Aug 12 23:48:13.613103 systemd[1]: sshd@19-10.0.0.67:22-10.0.0.1:39146.service: Deactivated successfully. Aug 12 23:48:13.619139 systemd[1]: session-20.scope: Deactivated successfully. Aug 12 23:48:13.622484 systemd-logind[1520]: Session 20 logged out. Waiting for processes to exit. Aug 12 23:48:13.623983 systemd-logind[1520]: Removed session 20. 
Aug 12 23:48:13.627172 containerd[1536]: time="2025-08-12T23:48:13.627130194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"742a468369d48e49ff5b1cc311b3177f2441ca3683a31f5dbd057b711667cc68\" id:\"6b7fa63499343b968269663bfe6d937e575fb681fc59b3105e010b2d5194c13d\" pid:5527 exited_at:{seconds:1755042493 nanos:626729872}" Aug 12 23:48:16.731105 containerd[1536]: time="2025-08-12T23:48:16.731046511Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9a3ef371fc2716a3d82f04e28b6e1ba27802157b3762a34c5de0613bed69a435\" id:\"375ce18f1976aad5df17e53e7e66e0a2279540975dea4a8f963563dd57aea82f\" pid:5555 exited_at:{seconds:1755042496 nanos:730710310}" Aug 12 23:48:17.592248 containerd[1536]: time="2025-08-12T23:48:17.592196030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de1f6ba032eeedafba4fe7866ecd9d2b1340b79c07f54bb6d272cfe58ad24319\" id:\"dbf4bf0fde1c6555ad0c97a5d765d0fe19d3c8b2884289d30c23057ac3432d13\" pid:5582 exited_at:{seconds:1755042497 nanos:591970589}" Aug 12 23:48:18.629329 systemd[1]: Started sshd@20-10.0.0.67:22-10.0.0.1:39156.service - OpenSSH per-connection server daemon (10.0.0.1:39156). Aug 12 23:48:18.690145 sshd[5593]: Accepted publickey for core from 10.0.0.1 port 39156 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:48:18.691805 sshd-session[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:48:18.696170 systemd-logind[1520]: New session 21 of user core. Aug 12 23:48:18.707194 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 12 23:48:18.838104 sshd[5595]: Connection closed by 10.0.0.1 port 39156 Aug 12 23:48:18.838587 sshd-session[5593]: pam_unix(sshd:session): session closed for user core Aug 12 23:48:18.841994 systemd[1]: sshd@20-10.0.0.67:22-10.0.0.1:39156.service: Deactivated successfully. Aug 12 23:48:18.843782 systemd[1]: session-21.scope: Deactivated successfully. 
Aug 12 23:48:18.844492 systemd-logind[1520]: Session 21 logged out. Waiting for processes to exit. Aug 12 23:48:18.845511 systemd-logind[1520]: Removed session 21. Aug 12 23:48:23.849374 systemd[1]: Started sshd@21-10.0.0.67:22-10.0.0.1:55948.service - OpenSSH per-connection server daemon (10.0.0.1:55948). Aug 12 23:48:23.905003 sshd[5610]: Accepted publickey for core from 10.0.0.1 port 55948 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:48:23.906240 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:48:23.910026 systemd-logind[1520]: New session 22 of user core. Aug 12 23:48:23.924235 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 12 23:48:24.108574 sshd[5612]: Connection closed by 10.0.0.1 port 55948 Aug 12 23:48:24.109147 sshd-session[5610]: pam_unix(sshd:session): session closed for user core Aug 12 23:48:24.112645 systemd[1]: sshd@21-10.0.0.67:22-10.0.0.1:55948.service: Deactivated successfully. Aug 12 23:48:24.114452 systemd[1]: session-22.scope: Deactivated successfully. Aug 12 23:48:24.115112 systemd-logind[1520]: Session 22 logged out. Waiting for processes to exit. Aug 12 23:48:24.116548 systemd-logind[1520]: Removed session 22. Aug 12 23:48:29.128431 systemd[1]: Started sshd@22-10.0.0.67:22-10.0.0.1:55954.service - OpenSSH per-connection server daemon (10.0.0.1:55954). Aug 12 23:48:29.188925 sshd[5625]: Accepted publickey for core from 10.0.0.1 port 55954 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:48:29.190158 sshd-session[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:48:29.193766 systemd-logind[1520]: New session 23 of user core. Aug 12 23:48:29.206363 systemd[1]: Started session-23.scope - Session 23 of User core. 
Aug 12 23:48:29.374177 sshd[5627]: Connection closed by 10.0.0.1 port 55954 Aug 12 23:48:29.374853 sshd-session[5625]: pam_unix(sshd:session): session closed for user core Aug 12 23:48:29.381465 systemd[1]: sshd@22-10.0.0.67:22-10.0.0.1:55954.service: Deactivated successfully. Aug 12 23:48:29.383185 systemd[1]: session-23.scope: Deactivated successfully. Aug 12 23:48:29.383930 systemd-logind[1520]: Session 23 logged out. Waiting for processes to exit. Aug 12 23:48:29.385164 systemd-logind[1520]: Removed session 23. Aug 12 23:48:31.051361 containerd[1536]: time="2025-08-12T23:48:31.050851297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de1f6ba032eeedafba4fe7866ecd9d2b1340b79c07f54bb6d272cfe58ad24319\" id:\"e1c01095b09e726e17b2b17f85a70518e7e9fc076d459cbabe2123144827f330\" pid:5652 exited_at:{seconds:1755042511 nanos:50516576}" Aug 12 23:48:34.389710 systemd[1]: Started sshd@23-10.0.0.67:22-10.0.0.1:35448.service - OpenSSH per-connection server daemon (10.0.0.1:35448). Aug 12 23:48:34.462480 sshd[5664]: Accepted publickey for core from 10.0.0.1 port 35448 ssh2: RSA SHA256:Bk7uJ3DDK+Y7ogf3dGZLP447i4jtLnzkQos038lnf/E Aug 12 23:48:34.463929 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 12 23:48:34.471856 systemd-logind[1520]: New session 24 of user core. Aug 12 23:48:34.478244 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 12 23:48:34.716668 sshd[5666]: Connection closed by 10.0.0.1 port 35448 Aug 12 23:48:34.717686 sshd-session[5664]: pam_unix(sshd:session): session closed for user core Aug 12 23:48:34.724317 systemd[1]: sshd@23-10.0.0.67:22-10.0.0.1:35448.service: Deactivated successfully. Aug 12 23:48:34.725982 systemd[1]: session-24.scope: Deactivated successfully. Aug 12 23:48:34.728386 systemd-logind[1520]: Session 24 logged out. Waiting for processes to exit. Aug 12 23:48:34.730785 systemd-logind[1520]: Removed session 24.