Jul 11 05:11:44.785654 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 11 05:11:44.785675 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri Jul 11 03:37:34 -00 2025
Jul 11 05:11:44.785685 kernel: KASLR enabled
Jul 11 05:11:44.785691 kernel: efi: EFI v2.7 by EDK II
Jul 11 05:11:44.785697 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 11 05:11:44.785702 kernel: random: crng init done
Jul 11 05:11:44.785709 kernel: secureboot: Secure boot disabled
Jul 11 05:11:44.785715 kernel: ACPI: Early table checksum verification disabled
Jul 11 05:11:44.785722 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 11 05:11:44.785729 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 11 05:11:44.785735 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:11:44.785741 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:11:44.785747 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:11:44.785754 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:11:44.785761 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:11:44.785769 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:11:44.785775 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:11:44.785782 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:11:44.785788 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 11 05:11:44.785795 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 11 05:11:44.785801 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 11 05:11:44.785807 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 05:11:44.785814 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Jul 11 05:11:44.785820 kernel: Zone ranges:
Jul 11 05:11:44.785827 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 05:11:44.785834 kernel: DMA32 empty
Jul 11 05:11:44.785840 kernel: Normal empty
Jul 11 05:11:44.785847 kernel: Device empty
Jul 11 05:11:44.785853 kernel: Movable zone start for each node
Jul 11 05:11:44.785859 kernel: Early memory node ranges
Jul 11 05:11:44.785866 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 11 05:11:44.785874 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 11 05:11:44.785882 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 11 05:11:44.785889 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 11 05:11:44.785896 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 11 05:11:44.785903 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 11 05:11:44.785909 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 11 05:11:44.785917 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 11 05:11:44.785923 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 11 05:11:44.785929 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 11 05:11:44.785938 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 11 05:11:44.785945 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 11 05:11:44.785952 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 11 05:11:44.785960 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 11 05:11:44.785978 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 11 05:11:44.785985 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jul 11 05:11:44.785992 kernel: psci: probing for conduit method from ACPI.
Jul 11 05:11:44.785999 kernel: psci: PSCIv1.1 detected in firmware.
Jul 11 05:11:44.786006 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 11 05:11:44.786012 kernel: psci: Trusted OS migration not required
Jul 11 05:11:44.786019 kernel: psci: SMC Calling Convention v1.1
Jul 11 05:11:44.786026 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 11 05:11:44.786033 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 11 05:11:44.786042 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 11 05:11:44.786049 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 11 05:11:44.786055 kernel: Detected PIPT I-cache on CPU0
Jul 11 05:11:44.786062 kernel: CPU features: detected: GIC system register CPU interface
Jul 11 05:11:44.786069 kernel: CPU features: detected: Spectre-v4
Jul 11 05:11:44.786076 kernel: CPU features: detected: Spectre-BHB
Jul 11 05:11:44.786083 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 11 05:11:44.786090 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 11 05:11:44.786096 kernel: CPU features: detected: ARM erratum 1418040
Jul 11 05:11:44.786103 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 11 05:11:44.786115 kernel: alternatives: applying boot alternatives
Jul 11 05:11:44.786123 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c3897e9e5bdb5872ff4c86729cf311c0e9d40949a2432461ec9aeef8c2526e01
Jul 11 05:11:44.786132 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 11 05:11:44.786139 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 11 05:11:44.786146 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 11 05:11:44.786152 kernel: Fallback order for Node 0: 0
Jul 11 05:11:44.786159 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 11 05:11:44.786166 kernel: Policy zone: DMA
Jul 11 05:11:44.786173 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 11 05:11:44.786179 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 11 05:11:44.786186 kernel: software IO TLB: area num 4.
Jul 11 05:11:44.786193 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 11 05:11:44.786200 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jul 11 05:11:44.786208 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 11 05:11:44.786215 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 11 05:11:44.786222 kernel: rcu: RCU event tracing is enabled.
Jul 11 05:11:44.786229 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 11 05:11:44.786236 kernel: Trampoline variant of Tasks RCU enabled.
Jul 11 05:11:44.786243 kernel: Tracing variant of Tasks RCU enabled.
Jul 11 05:11:44.786250 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 11 05:11:44.786257 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 11 05:11:44.786263 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 05:11:44.786270 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 11 05:11:44.786277 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 11 05:11:44.786285 kernel: GICv3: 256 SPIs implemented
Jul 11 05:11:44.786292 kernel: GICv3: 0 Extended SPIs implemented
Jul 11 05:11:44.786299 kernel: Root IRQ handler: gic_handle_irq
Jul 11 05:11:44.786306 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 11 05:11:44.786313 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 11 05:11:44.786319 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 11 05:11:44.786326 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 11 05:11:44.786333 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 11 05:11:44.786340 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 11 05:11:44.786347 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 11 05:11:44.786354 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 11 05:11:44.786360 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 11 05:11:44.786369 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 05:11:44.786375 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 11 05:11:44.786382 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 11 05:11:44.786389 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 11 05:11:44.786396 kernel: arm-pv: using stolen time PV
Jul 11 05:11:44.786403 kernel: Console: colour dummy device 80x25
Jul 11 05:11:44.786410 kernel: ACPI: Core revision 20240827
Jul 11 05:11:44.786418 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 11 05:11:44.786425 kernel: pid_max: default: 32768 minimum: 301
Jul 11 05:11:44.786432 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 11 05:11:44.786440 kernel: landlock: Up and running.
Jul 11 05:11:44.786447 kernel: SELinux: Initializing.
Jul 11 05:11:44.786454 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 05:11:44.786461 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 11 05:11:44.786468 kernel: rcu: Hierarchical SRCU implementation.
Jul 11 05:11:44.786476 kernel: rcu: Max phase no-delay instances is 400.
Jul 11 05:11:44.786483 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 11 05:11:44.786490 kernel: Remapping and enabling EFI services.
Jul 11 05:11:44.786497 kernel: smp: Bringing up secondary CPUs ...
Jul 11 05:11:44.786509 kernel: Detected PIPT I-cache on CPU1
Jul 11 05:11:44.786517 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 11 05:11:44.786524 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 11 05:11:44.786533 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 05:11:44.786540 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 11 05:11:44.786548 kernel: Detected PIPT I-cache on CPU2
Jul 11 05:11:44.786555 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 11 05:11:44.786563 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 11 05:11:44.786572 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 05:11:44.786579 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 11 05:11:44.786586 kernel: Detected PIPT I-cache on CPU3
Jul 11 05:11:44.786594 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 11 05:11:44.786601 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 11 05:11:44.786608 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 11 05:11:44.786615 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 11 05:11:44.786623 kernel: smp: Brought up 1 node, 4 CPUs
Jul 11 05:11:44.786630 kernel: SMP: Total of 4 processors activated.
Jul 11 05:11:44.786639 kernel: CPU: All CPU(s) started at EL1
Jul 11 05:11:44.786646 kernel: CPU features: detected: 32-bit EL0 Support
Jul 11 05:11:44.786654 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 11 05:11:44.786661 kernel: CPU features: detected: Common not Private translations
Jul 11 05:11:44.786668 kernel: CPU features: detected: CRC32 instructions
Jul 11 05:11:44.786676 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 11 05:11:44.786683 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 11 05:11:44.786690 kernel: CPU features: detected: LSE atomic instructions
Jul 11 05:11:44.786698 kernel: CPU features: detected: Privileged Access Never
Jul 11 05:11:44.786706 kernel: CPU features: detected: RAS Extension Support
Jul 11 05:11:44.786714 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 11 05:11:44.786721 kernel: alternatives: applying system-wide alternatives
Jul 11 05:11:44.786729 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 11 05:11:44.786736 kernel: Memory: 2424032K/2572288K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 125920K reserved, 16384K cma-reserved)
Jul 11 05:11:44.786744 kernel: devtmpfs: initialized
Jul 11 05:11:44.786751 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 11 05:11:44.786759 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 11 05:11:44.786766 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 11 05:11:44.786775 kernel: 0 pages in range for non-PLT usage
Jul 11 05:11:44.786782 kernel: 508448 pages in range for PLT usage
Jul 11 05:11:44.786790 kernel: pinctrl core: initialized pinctrl subsystem
Jul 11 05:11:44.786797 kernel: SMBIOS 3.0.0 present.
Jul 11 05:11:44.786804 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 11 05:11:44.786812 kernel: DMI: Memory slots populated: 1/1
Jul 11 05:11:44.786819 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 11 05:11:44.786827 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 11 05:11:44.786834 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 11 05:11:44.786843 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 11 05:11:44.786850 kernel: audit: initializing netlink subsys (disabled)
Jul 11 05:11:44.786858 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Jul 11 05:11:44.786865 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 11 05:11:44.786872 kernel: cpuidle: using governor menu
Jul 11 05:11:44.786880 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 11 05:11:44.786887 kernel: ASID allocator initialised with 32768 entries
Jul 11 05:11:44.786895 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 11 05:11:44.786902 kernel: Serial: AMBA PL011 UART driver
Jul 11 05:11:44.786910 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 11 05:11:44.786918 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 11 05:11:44.786925 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 11 05:11:44.786933 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 11 05:11:44.786940 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 11 05:11:44.786947 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 11 05:11:44.786955 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 11 05:11:44.786962 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 11 05:11:44.786975 kernel: ACPI: Added _OSI(Module Device)
Jul 11 05:11:44.786984 kernel: ACPI: Added _OSI(Processor Device)
Jul 11 05:11:44.786992 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 11 05:11:44.786999 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 11 05:11:44.787007 kernel: ACPI: Interpreter enabled
Jul 11 05:11:44.787014 kernel: ACPI: Using GIC for interrupt routing
Jul 11 05:11:44.787021 kernel: ACPI: MCFG table detected, 1 entries
Jul 11 05:11:44.787029 kernel: ACPI: CPU0 has been hot-added
Jul 11 05:11:44.787036 kernel: ACPI: CPU1 has been hot-added
Jul 11 05:11:44.787043 kernel: ACPI: CPU2 has been hot-added
Jul 11 05:11:44.787051 kernel: ACPI: CPU3 has been hot-added
Jul 11 05:11:44.787059 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 11 05:11:44.787067 kernel: printk: legacy console [ttyAMA0] enabled
Jul 11 05:11:44.787074 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 11 05:11:44.787209 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 11 05:11:44.787277 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 11 05:11:44.787338 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 11 05:11:44.787398 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 11 05:11:44.787459 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 11 05:11:44.787469 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 11 05:11:44.787477 kernel: PCI host bridge to bus 0000:00
Jul 11 05:11:44.787543 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 11 05:11:44.787599 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 11 05:11:44.787654 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 11 05:11:44.787708 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 11 05:11:44.787787 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 11 05:11:44.787863 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 11 05:11:44.787927 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 11 05:11:44.788005 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 11 05:11:44.788070 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 11 05:11:44.788145 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 11 05:11:44.788209 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 11 05:11:44.788276 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 11 05:11:44.788333 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 11 05:11:44.788388 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 11 05:11:44.788444 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 11 05:11:44.788454 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 11 05:11:44.788461 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 11 05:11:44.788469 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 11 05:11:44.788478 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 11 05:11:44.788485 kernel: iommu: Default domain type: Translated
Jul 11 05:11:44.788492 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 11 05:11:44.788500 kernel: efivars: Registered efivars operations
Jul 11 05:11:44.788507 kernel: vgaarb: loaded
Jul 11 05:11:44.788515 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 11 05:11:44.788555 kernel: VFS: Disk quotas dquot_6.6.0
Jul 11 05:11:44.788563 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 11 05:11:44.788571 kernel: pnp: PnP ACPI init
Jul 11 05:11:44.788646 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 11 05:11:44.788657 kernel: pnp: PnP ACPI: found 1 devices
Jul 11 05:11:44.788665 kernel: NET: Registered PF_INET protocol family
Jul 11 05:11:44.788672 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 11 05:11:44.788679 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 11 05:11:44.788687 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 11 05:11:44.788694 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 11 05:11:44.788702 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 11 05:11:44.788711 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 11 05:11:44.788718 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 05:11:44.788726 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 11 05:11:44.788733 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 11 05:11:44.788740 kernel: PCI: CLS 0 bytes, default 64
Jul 11 05:11:44.788747 kernel: kvm [1]: HYP mode not available
Jul 11 05:11:44.788755 kernel: Initialise system trusted keyrings
Jul 11 05:11:44.788762 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 11 05:11:44.788770 kernel: Key type asymmetric registered
Jul 11 05:11:44.788778 kernel: Asymmetric key parser 'x509' registered
Jul 11 05:11:44.788786 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 11 05:11:44.788793 kernel: io scheduler mq-deadline registered
Jul 11 05:11:44.788819 kernel: io scheduler kyber registered
Jul 11 05:11:44.788835 kernel: io scheduler bfq registered
Jul 11 05:11:44.788843 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 11 05:11:44.788851 kernel: ACPI: button: Power Button [PWRB]
Jul 11 05:11:44.788859 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 11 05:11:44.788928 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 11 05:11:44.788940 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 11 05:11:44.788947 kernel: thunder_xcv, ver 1.0
Jul 11 05:11:44.788954 kernel: thunder_bgx, ver 1.0
Jul 11 05:11:44.788962 kernel: nicpf, ver 1.0
Jul 11 05:11:44.789025 kernel: nicvf, ver 1.0
Jul 11 05:11:44.789104 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 11 05:11:44.789176 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-11T05:11:44 UTC (1752210704)
Jul 11 05:11:44.789186 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 11 05:11:44.789196 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 11 05:11:44.789204 kernel: watchdog: NMI not fully supported
Jul 11 05:11:44.789211 kernel: watchdog: Hard watchdog permanently disabled
Jul 11 05:11:44.789218 kernel: NET: Registered PF_INET6 protocol family
Jul 11 05:11:44.789226 kernel: Segment Routing with IPv6
Jul 11 05:11:44.789233 kernel: In-situ OAM (IOAM) with IPv6
Jul 11 05:11:44.789240 kernel: NET: Registered PF_PACKET protocol family
Jul 11 05:11:44.789247 kernel: Key type dns_resolver registered
Jul 11 05:11:44.789255 kernel: registered taskstats version 1
Jul 11 05:11:44.789262 kernel: Loading compiled-in X.509 certificates
Jul 11 05:11:44.789271 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: e555124bc12a1bc970fb227548e219a82d747130'
Jul 11 05:11:44.789278 kernel: Demotion targets for Node 0: null
Jul 11 05:11:44.789285 kernel: Key type .fscrypt registered
Jul 11 05:11:44.789293 kernel: Key type fscrypt-provisioning registered
Jul 11 05:11:44.789300 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 11 05:11:44.789307 kernel: ima: Allocated hash algorithm: sha1
Jul 11 05:11:44.789315 kernel: ima: No architecture policies found
Jul 11 05:11:44.789322 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 11 05:11:44.789330 kernel: clk: Disabling unused clocks
Jul 11 05:11:44.789338 kernel: PM: genpd: Disabling unused power domains
Jul 11 05:11:44.789345 kernel: Warning: unable to open an initial console.
Jul 11 05:11:44.789353 kernel: Freeing unused kernel memory: 39424K
Jul 11 05:11:44.789360 kernel: Run /init as init process
Jul 11 05:11:44.789367 kernel: with arguments:
Jul 11 05:11:44.789374 kernel: /init
Jul 11 05:11:44.789381 kernel: with environment:
Jul 11 05:11:44.789388 kernel: HOME=/
Jul 11 05:11:44.789396 kernel: TERM=linux
Jul 11 05:11:44.789404 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 11 05:11:44.789412 systemd[1]: Successfully made /usr/ read-only.
Jul 11 05:11:44.789422 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 05:11:44.789431 systemd[1]: Detected virtualization kvm.
Jul 11 05:11:44.789438 systemd[1]: Detected architecture arm64.
Jul 11 05:11:44.789446 systemd[1]: Running in initrd.
Jul 11 05:11:44.789453 systemd[1]: No hostname configured, using default hostname.
Jul 11 05:11:44.789463 systemd[1]: Hostname set to .
Jul 11 05:11:44.789471 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 05:11:44.789478 systemd[1]: Queued start job for default target initrd.target.
Jul 11 05:11:44.789486 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 05:11:44.789494 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 05:11:44.789502 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 11 05:11:44.789510 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 05:11:44.789518 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 11 05:11:44.789529 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 11 05:11:44.789537 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 11 05:11:44.789546 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 11 05:11:44.789554 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 05:11:44.789562 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 05:11:44.789570 systemd[1]: Reached target paths.target - Path Units.
Jul 11 05:11:44.789577 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 05:11:44.789586 systemd[1]: Reached target swap.target - Swaps.
Jul 11 05:11:44.789598 systemd[1]: Reached target timers.target - Timer Units.
Jul 11 05:11:44.789608 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 05:11:44.789620 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 05:11:44.789630 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 11 05:11:44.789638 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 11 05:11:44.789646 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 05:11:44.789654 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 05:11:44.789663 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 05:11:44.789671 systemd[1]: Reached target sockets.target - Socket Units.
Jul 11 05:11:44.789679 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 11 05:11:44.789687 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 05:11:44.789694 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 11 05:11:44.789703 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 11 05:11:44.789711 systemd[1]: Starting systemd-fsck-usr.service...
Jul 11 05:11:44.789719 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 05:11:44.789726 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 05:11:44.789736 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 05:11:44.789744 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 05:11:44.789752 systemd[1]: Finished systemd-fsck-usr.service.
Jul 11 05:11:44.789760 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 11 05:11:44.789769 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 05:11:44.789793 systemd-journald[244]: Collecting audit messages is disabled.
Jul 11 05:11:44.789813 systemd-journald[244]: Journal started
Jul 11 05:11:44.789833 systemd-journald[244]: Runtime Journal (/run/log/journal/fae4f22f28c04950a3ccf8f9f37ae141) is 6M, max 48.5M, 42.4M free.
Jul 11 05:11:44.783846 systemd-modules-load[245]: Inserted module 'overlay'
Jul 11 05:11:44.792997 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 05:11:44.794384 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 05:11:44.796017 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 05:11:44.798546 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 11 05:11:44.799910 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 05:11:44.803956 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 11 05:11:44.806624 systemd-modules-load[245]: Inserted module 'br_netfilter'
Jul 11 05:11:44.807288 kernel: Bridge firewalling registered
Jul 11 05:11:44.813589 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 05:11:44.814615 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 05:11:44.818123 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 05:11:44.821499 systemd-tmpfiles[267]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 11 05:11:44.823104 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 05:11:44.824335 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 05:11:44.828019 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 05:11:44.829014 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 05:11:44.831636 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 11 05:11:44.833469 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 05:11:44.864490 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=c3897e9e5bdb5872ff4c86729cf311c0e9d40949a2432461ec9aeef8c2526e01
Jul 11 05:11:44.879128 systemd-resolved[287]: Positive Trust Anchors:
Jul 11 05:11:44.879145 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 05:11:44.879176 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 05:11:44.883832 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jul 11 05:11:44.884766 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 05:11:44.885945 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 05:11:44.939004 kernel: SCSI subsystem initialized
Jul 11 05:11:44.944989 kernel: Loading iSCSI transport class v2.0-870.
Jul 11 05:11:44.951988 kernel: iscsi: registered transport (tcp)
Jul 11 05:11:44.964997 kernel: iscsi: registered transport (qla4xxx)
Jul 11 05:11:44.965036 kernel: QLogic iSCSI HBA Driver
Jul 11 05:11:44.980484 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 05:11:44.994954 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 05:11:44.996118 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 05:11:45.039026 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 11 05:11:45.040938 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 11 05:11:45.100020 kernel: raid6: neonx8 gen() 15786 MB/s
Jul 11 05:11:45.116982 kernel: raid6: neonx4 gen() 15711 MB/s
Jul 11 05:11:45.134004 kernel: raid6: neonx2 gen() 13122 MB/s
Jul 11 05:11:45.150988 kernel: raid6: neonx1 gen() 10444 MB/s
Jul 11 05:11:45.168002 kernel: raid6: int64x8 gen() 6896 MB/s
Jul 11 05:11:45.184997 kernel: raid6: int64x4 gen() 7330 MB/s
Jul 11 05:11:45.201983 kernel: raid6: int64x2 gen() 6082 MB/s
Jul 11 05:11:45.218994 kernel: raid6: int64x1 gen() 5036 MB/s
Jul 11 05:11:45.219018 kernel: raid6: using algorithm neonx8 gen() 15786 MB/s
Jul 11 05:11:45.235990 kernel: raid6: .... xor() 12003 MB/s, rmw enabled
Jul 11 05:11:45.236003 kernel: raid6: using neon recovery algorithm
Jul 11 05:11:45.240985 kernel: xor: measuring software checksum speed
Jul 11 05:11:45.241001 kernel: 8regs : 21641 MB/sec
Jul 11 05:11:45.242304 kernel: 32regs : 19995 MB/sec
Jul 11 05:11:45.242327 kernel: arm64_neon : 28032 MB/sec
Jul 11 05:11:45.242346 kernel: xor: using function: arm64_neon (28032 MB/sec)
Jul 11 05:11:45.296995 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 11 05:11:45.304044 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 05:11:45.306173 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 05:11:45.335362 systemd-udevd[497]: Using default interface naming scheme 'v255'.
Jul 11 05:11:45.339365 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 05:11:45.340928 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 11 05:11:45.364658 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation
Jul 11 05:11:45.385154 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 05:11:45.387013 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 05:11:45.449993 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 05:11:45.451782 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 11 05:11:45.494994 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 11 05:11:45.500520 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 11 05:11:45.504400 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 11 05:11:45.504440 kernel: GPT:9289727 != 19775487
Jul 11 05:11:45.504450 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 11 05:11:45.505356 kernel: GPT:9289727 != 19775487
Jul 11 05:11:45.505392 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 11 05:11:45.507075 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 05:11:45.507057 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 05:11:45.507194 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 05:11:45.509539 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 05:11:45.511206 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 11 05:11:45.535950 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 11 05:11:45.537552 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 05:11:45.544345 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 11 05:11:45.550821 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 11 05:11:45.551744 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 11 05:11:45.560067 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 11 05:11:45.571252 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 05:11:45.572159 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 05:11:45.573744 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 05:11:45.575240 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 05:11:45.577358 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 11 05:11:45.578838 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 11 05:11:45.595458 disk-uuid[589]: Primary Header is updated.
Jul 11 05:11:45.595458 disk-uuid[589]: Secondary Entries is updated.
Jul 11 05:11:45.595458 disk-uuid[589]: Secondary Header is updated.
Jul 11 05:11:45.598993 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 05:11:45.599550 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 05:11:46.611986 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 11 05:11:46.612697 disk-uuid[593]: The operation has completed successfully.
Jul 11 05:11:46.637384 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 11 05:11:46.637480 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 11 05:11:46.660676 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 11 05:11:46.671516 sh[610]: Success
Jul 11 05:11:46.687234 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 11 05:11:46.687280 kernel: device-mapper: uevent: version 1.0.3
Jul 11 05:11:46.687291 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 11 05:11:46.699004 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jul 11 05:11:46.721615 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 11 05:11:46.723980 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 11 05:11:46.741025 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 11 05:11:46.747275 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 11 05:11:46.747303 kernel: BTRFS: device fsid 3cc53545-bcff-43a4-a907-3a89bda31132 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (622)
Jul 11 05:11:46.748408 kernel: BTRFS info (device dm-0): first mount of filesystem 3cc53545-bcff-43a4-a907-3a89bda31132
Jul 11 05:11:46.748425 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 11 05:11:46.749039 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 11 05:11:46.752626 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 11 05:11:46.753623 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 11 05:11:46.754674 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 11 05:11:46.755322 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 11 05:11:46.757699 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 11 05:11:46.777790 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (655)
Jul 11 05:11:46.777824 kernel: BTRFS info (device vda6): first mount of filesystem 8b6d4331-e552-452c-ad36-39a2024f4534
Jul 11 05:11:46.777835 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 05:11:46.778982 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 05:11:46.783985 kernel: BTRFS info (device vda6): last unmount of filesystem 8b6d4331-e552-452c-ad36-39a2024f4534
Jul 11 05:11:46.785017 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 11 05:11:46.786665 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 11 05:11:46.851667 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 05:11:46.854962 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 05:11:46.894684 systemd-networkd[797]: lo: Link UP
Jul 11 05:11:46.894696 systemd-networkd[797]: lo: Gained carrier
Jul 11 05:11:46.895378 systemd-networkd[797]: Enumeration completed
Jul 11 05:11:46.895687 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 11 05:11:46.895789 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 05:11:46.895792 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 11 05:11:46.896360 systemd-networkd[797]: eth0: Link UP
Jul 11 05:11:46.896363 systemd-networkd[797]: eth0: Gained carrier
Jul 11 05:11:46.896370 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 11 05:11:46.897133 systemd[1]: Reached target network.target - Network.
Jul 11 05:11:46.917029 systemd-networkd[797]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 11 05:11:46.920547 ignition[701]: Ignition 2.21.0
Jul 11 05:11:46.920559 ignition[701]: Stage: fetch-offline
Jul 11 05:11:46.920587 ignition[701]: no configs at "/usr/lib/ignition/base.d"
Jul 11 05:11:46.920595 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 05:11:46.920772 ignition[701]: parsed url from cmdline: ""
Jul 11 05:11:46.920775 ignition[701]: no config URL provided
Jul 11 05:11:46.920779 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
Jul 11 05:11:46.920786 ignition[701]: no config at "/usr/lib/ignition/user.ign"
Jul 11 05:11:46.920802 ignition[701]: op(1): [started] loading QEMU firmware config module
Jul 11 05:11:46.920806 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 11 05:11:46.932573 ignition[701]: op(1): [finished] loading QEMU firmware config module
Jul 11 05:11:46.970641 ignition[701]: parsing config with SHA512: 5577b1544c640b2072a359e2946d9097e776d9bd81fdb8e74eccb1bd30ffb3104d92ad9117ebf0f6179fe324e4fce2022a8f932f31d990ec53f7156de37e8fe4
Jul 11 05:11:46.974746 unknown[701]: fetched base config from "system"
Jul 11 05:11:46.974757 unknown[701]: fetched user config from "qemu"
Jul 11 05:11:46.975434 ignition[701]: fetch-offline: fetch-offline passed
Jul 11 05:11:46.975495 ignition[701]: Ignition finished successfully
Jul 11 05:11:46.978300 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 05:11:46.980379 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 11 05:11:46.981183 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 11 05:11:47.004743 ignition[809]: Ignition 2.21.0
Jul 11 05:11:47.004762 ignition[809]: Stage: kargs
Jul 11 05:11:47.004887 ignition[809]: no configs at "/usr/lib/ignition/base.d"
Jul 11 05:11:47.004896 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 05:11:47.007011 ignition[809]: kargs: kargs passed
Jul 11 05:11:47.007241 ignition[809]: Ignition finished successfully
Jul 11 05:11:47.010548 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 11 05:11:47.012233 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 11 05:11:47.036052 ignition[817]: Ignition 2.21.0
Jul 11 05:11:47.036068 ignition[817]: Stage: disks
Jul 11 05:11:47.036197 ignition[817]: no configs at "/usr/lib/ignition/base.d"
Jul 11 05:11:47.036205 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 05:11:47.037448 ignition[817]: disks: disks passed
Jul 11 05:11:47.037493 ignition[817]: Ignition finished successfully
Jul 11 05:11:47.039085 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 11 05:11:47.040190 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 11 05:11:47.041012 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 11 05:11:47.042409 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 05:11:47.043727 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 11 05:11:47.045053 systemd[1]: Reached target basic.target - Basic System.
Jul 11 05:11:47.047144 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 11 05:11:47.065875 systemd-fsck[827]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jul 11 05:11:47.069600 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 11 05:11:47.071418 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 11 05:11:47.135986 kernel: EXT4-fs (vda9): mounted filesystem 1377db55-4b0b-44d7-86ad-f9343775ed75 r/w with ordered data mode. Quota mode: none.
Jul 11 05:11:47.136484 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 11 05:11:47.137452 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 11 05:11:47.139787 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 05:11:47.141634 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 11 05:11:47.142410 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 11 05:11:47.142446 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 11 05:11:47.142468 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 05:11:47.159094 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 11 05:11:47.160770 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 11 05:11:47.165303 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (835)
Jul 11 05:11:47.165337 kernel: BTRFS info (device vda6): first mount of filesystem 8b6d4331-e552-452c-ad36-39a2024f4534
Jul 11 05:11:47.166150 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 05:11:47.166979 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 05:11:47.168891 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 05:11:47.199834 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Jul 11 05:11:47.203821 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Jul 11 05:11:47.207321 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Jul 11 05:11:47.210129 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 11 05:11:47.277542 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 11 05:11:47.280581 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 11 05:11:47.281835 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 11 05:11:47.301020 kernel: BTRFS info (device vda6): last unmount of filesystem 8b6d4331-e552-452c-ad36-39a2024f4534
Jul 11 05:11:47.323144 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 11 05:11:47.333914 ignition[950]: INFO : Ignition 2.21.0
Jul 11 05:11:47.333914 ignition[950]: INFO : Stage: mount
Jul 11 05:11:47.335050 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 05:11:47.335050 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 05:11:47.336482 ignition[950]: INFO : mount: mount passed
Jul 11 05:11:47.336482 ignition[950]: INFO : Ignition finished successfully
Jul 11 05:11:47.337522 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 11 05:11:47.339008 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 11 05:11:47.872769 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 11 05:11:47.874883 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 11 05:11:47.893646 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (962)
Jul 11 05:11:47.893680 kernel: BTRFS info (device vda6): first mount of filesystem 8b6d4331-e552-452c-ad36-39a2024f4534
Jul 11 05:11:47.893691 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 11 05:11:47.893701 kernel: BTRFS info (device vda6): using free-space-tree
Jul 11 05:11:47.896376 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 11 05:11:47.922745 ignition[979]: INFO : Ignition 2.21.0
Jul 11 05:11:47.922745 ignition[979]: INFO : Stage: files
Jul 11 05:11:47.924461 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 05:11:47.924461 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 05:11:47.925841 ignition[979]: DEBUG : files: compiled without relabeling support, skipping
Jul 11 05:11:47.926887 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 11 05:11:47.926887 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 11 05:11:47.929319 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 11 05:11:47.930249 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 11 05:11:47.930249 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 11 05:11:47.929829 unknown[979]: wrote ssh authorized keys file for user: core
Jul 11 05:11:47.932817 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 11 05:11:47.932817 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 11 05:11:48.030472 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 11 05:11:48.128175 systemd-networkd[797]: eth0: Gained IPv6LL
Jul 11 05:11:48.239270 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 11 05:11:48.240669 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jul 11 05:11:48.240669 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 11 05:11:48.240669 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 05:11:48.240669 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 11 05:11:48.240669 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 05:11:48.240669 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 11 05:11:48.240669 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 05:11:48.240669 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 11 05:11:48.250418 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 05:11:48.250418 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 11 05:11:48.250418 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 11 05:11:48.250418 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 11 05:11:48.250418 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 11 05:11:48.250418 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 11 05:11:48.608312 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jul 11 05:11:48.996373 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 11 05:11:48.996373 ignition[979]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jul 11 05:11:48.999245 ignition[979]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 05:11:48.999245 ignition[979]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 11 05:11:48.999245 ignition[979]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jul 11 05:11:48.999245 ignition[979]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Jul 11 05:11:48.999245 ignition[979]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 05:11:48.999245 ignition[979]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 11 05:11:48.999245 ignition[979]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 11 05:11:48.999245 ignition[979]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Jul 11 05:11:49.015132 ignition[979]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 05:11:49.018894 ignition[979]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 11 05:11:49.020028 ignition[979]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 11 05:11:49.020028 ignition[979]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Jul 11 05:11:49.020028 ignition[979]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Jul 11 05:11:49.020028 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 05:11:49.020028 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 11 05:11:49.020028 ignition[979]: INFO : files: files passed
Jul 11 05:11:49.020028 ignition[979]: INFO : Ignition finished successfully
Jul 11 05:11:49.021329 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 11 05:11:49.026206 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 11 05:11:49.028117 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 11 05:11:49.037500 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 11 05:11:49.037583 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 11 05:11:49.041013 initrd-setup-root-after-ignition[1008]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 11 05:11:49.043721 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 05:11:49.043721 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 05:11:49.046118 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 11 05:11:49.047907 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 05:11:49.048946 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 11 05:11:49.052140 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 11 05:11:49.106535 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 11 05:11:49.106651 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 11 05:11:49.108181 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 11 05:11:49.109591 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 11 05:11:49.110827 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 11 05:11:49.111497 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 11 05:11:49.131377 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 05:11:49.133263 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 11 05:11:49.150147 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 11 05:11:49.151039 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 05:11:49.152496 systemd[1]: Stopped target timers.target - Timer Units.
Jul 11 05:11:49.153744 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 11 05:11:49.153846 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 11 05:11:49.155691 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 11 05:11:49.157060 systemd[1]: Stopped target basic.target - Basic System.
Jul 11 05:11:49.158274 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 11 05:11:49.159506 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 11 05:11:49.160936 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 11 05:11:49.162440 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jul 11 05:11:49.163743 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 11 05:11:49.165026 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 11 05:11:49.166527 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 11 05:11:49.167865 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 11 05:11:49.169089 systemd[1]: Stopped target swap.target - Swaps.
Jul 11 05:11:49.170262 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 11 05:11:49.170362 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 11 05:11:49.172001 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 11 05:11:49.173363 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 05:11:49.174705 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 11 05:11:49.174792 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 05:11:49.176182 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 11 05:11:49.176284 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 11 05:11:49.178282 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 11 05:11:49.178387 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 11 05:11:49.179715 systemd[1]: Stopped target paths.target - Path Units.
Jul 11 05:11:49.180792 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 11 05:11:49.184014 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 05:11:49.184899 systemd[1]: Stopped target slices.target - Slice Units.
Jul 11 05:11:49.186410 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 11 05:11:49.187509 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 11 05:11:49.187589 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 11 05:11:49.188790 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 11 05:11:49.188858 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 11 05:11:49.190013 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 11 05:11:49.190122 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 11 05:11:49.191374 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 11 05:11:49.191467 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 11 05:11:49.193169 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 11 05:11:49.194986 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 11 05:11:49.195693 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 11 05:11:49.195794 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 05:11:49.197092 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 11 05:11:49.197200 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 11 05:11:49.201338 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 11 05:11:49.207143 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 11 05:11:49.215280 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 11 05:11:49.219212 ignition[1034]: INFO : Ignition 2.21.0
Jul 11 05:11:49.219212 ignition[1034]: INFO : Stage: umount
Jul 11 05:11:49.221063 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 11 05:11:49.221063 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 11 05:11:49.221063 ignition[1034]: INFO : umount: umount passed
Jul 11 05:11:49.221063 ignition[1034]: INFO : Ignition finished successfully
Jul 11 05:11:49.222133 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 11 05:11:49.223029 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 11 05:11:49.223935 systemd[1]: Stopped target network.target - Network.
Jul 11 05:11:49.224933 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 11 05:11:49.224997 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 11 05:11:49.226366 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 11 05:11:49.226409 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 11 05:11:49.227581 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 11 05:11:49.227624 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 11 05:11:49.228708 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 11 05:11:49.228743 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 11 05:11:49.230093 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 11 05:11:49.231351 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 11 05:11:49.242117 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 11 05:11:49.242910 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 11 05:11:49.246086 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 11 05:11:49.246313 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 11 05:11:49.247300 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 11 05:11:49.249452 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 11 05:11:49.250019 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jul 11 05:11:49.251225 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 11 05:11:49.251260 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 05:11:49.253348 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 11 05:11:49.254528 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 11 05:11:49.254593 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 11 05:11:49.255954 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 11 05:11:49.256003 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 11 05:11:49.258149 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 11 05:11:49.258212 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 11 05:11:49.259476 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 11 05:11:49.259510 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 05:11:49.261485 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 05:11:49.265492 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 11 05:11:49.265544 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 11 05:11:49.274086 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 11 05:11:49.274848 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 11 05:11:49.279576 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 11 05:11:49.279688 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 11 05:11:49.281034 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 11 05:11:49.281077 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 11 05:11:49.283318 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 11 05:11:49.283440 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 05:11:49.284871 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 11 05:11:49.284930 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 11 05:11:49.285840 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 11 05:11:49.285866 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 05:11:49.287155 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 11 05:11:49.287191 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 11 05:11:49.289303 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 11 05:11:49.289341 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 11 05:11:49.291359 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 11 05:11:49.291408 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 11 05:11:49.294208 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 11 05:11:49.294937 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jul 11 05:11:49.295003 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 05:11:49.297040 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 11 05:11:49.297078 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 05:11:49.299260 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 11 05:11:49.299297 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 05:11:49.301594 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 11 05:11:49.301628 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 05:11:49.303298 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 11 05:11:49.303334 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 11 05:11:49.306399 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Jul 11 05:11:49.306443 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Jul 11 05:11:49.306473 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jul 11 05:11:49.306503 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 11 05:11:49.312476 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 11 05:11:49.312568 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 11 05:11:49.313961 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 11 05:11:49.316137 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 11 05:11:49.332337 systemd[1]: Switching root.
Jul 11 05:11:49.370710 systemd-journald[244]: Journal stopped
Jul 11 05:11:50.085561 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Jul 11 05:11:50.085604 kernel: SELinux: policy capability network_peer_controls=1
Jul 11 05:11:50.085619 kernel: SELinux: policy capability open_perms=1
Jul 11 05:11:50.085629 kernel: SELinux: policy capability extended_socket_class=1
Jul 11 05:11:50.085639 kernel: SELinux: policy capability always_check_network=0
Jul 11 05:11:50.085651 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 11 05:11:50.085661 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 11 05:11:50.085670 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 11 05:11:50.085683 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 11 05:11:50.085693 kernel: SELinux: policy capability userspace_initial_context=0
Jul 11 05:11:50.085702 kernel: audit: type=1403 audit(1752210709.554:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 11 05:11:50.085715 systemd[1]: Successfully loaded SELinux policy in 55.859ms.
Jul 11 05:11:50.085736 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.015ms.
Jul 11 05:11:50.085747 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 11 05:11:50.085758 systemd[1]: Detected virtualization kvm.
Jul 11 05:11:50.085768 systemd[1]: Detected architecture arm64.
Jul 11 05:11:50.085777 systemd[1]: Detected first boot.
Jul 11 05:11:50.085787 systemd[1]: Initializing machine ID from VM UUID.
Jul 11 05:11:50.085798 kernel: NET: Registered PF_VSOCK protocol family
Jul 11 05:11:50.085809 zram_generator::config[1081]: No configuration found.
Jul 11 05:11:50.085826 systemd[1]: Populated /etc with preset unit settings.
Jul 11 05:11:50.085836 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 11 05:11:50.085846 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 11 05:11:50.085857 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 11 05:11:50.085867 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 11 05:11:50.085878 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 11 05:11:50.085890 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 11 05:11:50.085899 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 11 05:11:50.085909 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 11 05:11:50.085919 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 11 05:11:50.085929 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 11 05:11:50.085939 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 11 05:11:50.085949 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 11 05:11:50.085959 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 11 05:11:50.085986 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 11 05:11:50.086000 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 11 05:11:50.086010 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 11 05:11:50.086021 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 11 05:11:50.086031 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 11 05:11:50.086041 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 11 05:11:50.086051 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 11 05:11:50.086062 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 11 05:11:50.086073 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 11 05:11:50.086083 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 11 05:11:50.086098 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 11 05:11:50.086109 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 11 05:11:50.086120 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 11 05:11:50.086130 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 11 05:11:50.086140 systemd[1]: Reached target slices.target - Slice Units.
Jul 11 05:11:50.086150 systemd[1]: Reached target swap.target - Swaps.
Jul 11 05:11:50.086165 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 11 05:11:50.086177 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 11 05:11:50.086187 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 11 05:11:50.086197 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 11 05:11:50.086208 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 11 05:11:50.086218 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 11 05:11:50.086227 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 11 05:11:50.086238 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 11 05:11:50.086248 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 11 05:11:50.086258 systemd[1]: Mounting media.mount - External Media Directory...
Jul 11 05:11:50.086269 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 11 05:11:50.086279 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 11 05:11:50.086289 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 11 05:11:50.086299 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 11 05:11:50.086309 systemd[1]: Reached target machines.target - Containers.
Jul 11 05:11:50.086319 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 11 05:11:50.086329 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 05:11:50.086339 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 11 05:11:50.086350 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 11 05:11:50.086361 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 05:11:50.086370 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 05:11:50.086380 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 05:11:50.086390 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 11 05:11:50.086400 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 05:11:50.086411 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 11 05:11:50.086421 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 11 05:11:50.086430 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 11 05:11:50.086441 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 11 05:11:50.086451 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 11 05:11:50.086461 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 05:11:50.086471 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 11 05:11:50.086481 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 11 05:11:50.086490 kernel: fuse: init (API version 7.41)
Jul 11 05:11:50.086500 kernel: loop: module loaded
Jul 11 05:11:50.086510 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 11 05:11:50.086520 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 11 05:11:50.086531 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 11 05:11:50.086540 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 11 05:11:50.086550 kernel: ACPI: bus type drm_connector registered
Jul 11 05:11:50.086559 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 11 05:11:50.086569 systemd[1]: Stopped verity-setup.service.
Jul 11 05:11:50.086580 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 11 05:11:50.086608 systemd-journald[1148]: Collecting audit messages is disabled.
Jul 11 05:11:50.086632 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 11 05:11:50.086642 systemd-journald[1148]: Journal started
Jul 11 05:11:50.086664 systemd-journald[1148]: Runtime Journal (/run/log/journal/fae4f22f28c04950a3ccf8f9f37ae141) is 6M, max 48.5M, 42.4M free.
Jul 11 05:11:49.901122 systemd[1]: Queued start job for default target multi-user.target.
Jul 11 05:11:49.922832 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 11 05:11:49.923202 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 11 05:11:50.088317 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 11 05:11:50.088865 systemd[1]: Mounted media.mount - External Media Directory.
Jul 11 05:11:50.089749 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 11 05:11:50.090696 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 11 05:11:50.091692 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 11 05:11:50.093262 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 11 05:11:50.094430 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 11 05:11:50.095592 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 11 05:11:50.095772 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 11 05:11:50.096861 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 05:11:50.097044 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 05:11:50.098144 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 05:11:50.098302 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 05:11:50.099271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 05:11:50.099434 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 05:11:50.100514 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 11 05:11:50.100656 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 11 05:11:50.103287 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 05:11:50.103437 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 05:11:50.104638 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 11 05:11:50.105758 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 11 05:11:50.106921 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 11 05:11:50.108270 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 11 05:11:50.119535 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 11 05:11:50.121516 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 11 05:11:50.123296 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 11 05:11:50.124166 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 11 05:11:50.124193 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 11 05:11:50.125698 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 11 05:11:50.135705 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 11 05:11:50.136613 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 05:11:50.137927 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 11 05:11:50.139577 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 11 05:11:50.140522 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 05:11:50.142140 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 11 05:11:50.142903 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 05:11:50.143779 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 11 05:11:50.146235 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 11 05:11:50.149838 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 11 05:11:50.149960 systemd-journald[1148]: Time spent on flushing to /var/log/journal/fae4f22f28c04950a3ccf8f9f37ae141 is 15.597ms for 888 entries.
Jul 11 05:11:50.149960 systemd-journald[1148]: System Journal (/var/log/journal/fae4f22f28c04950a3ccf8f9f37ae141) is 8M, max 195.6M, 187.6M free.
Jul 11 05:11:50.175525 systemd-journald[1148]: Received client request to flush runtime journal.
Jul 11 05:11:50.175574 kernel: loop0: detected capacity change from 0 to 105936
Jul 11 05:11:50.154064 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 11 05:11:50.155319 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 11 05:11:50.156813 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 11 05:11:50.158097 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 11 05:11:50.161737 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 11 05:11:50.164186 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 11 05:11:50.177471 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 11 05:11:50.183830 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 11 05:11:50.184422 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Jul 11 05:11:50.184432 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Jul 11 05:11:50.188424 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 11 05:11:50.191002 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 11 05:11:50.193111 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 11 05:11:50.207034 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 11 05:11:50.213009 kernel: loop1: detected capacity change from 0 to 207008
Jul 11 05:11:50.228295 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 11 05:11:50.230915 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 11 05:11:50.234987 kernel: loop2: detected capacity change from 0 to 134232
Jul 11 05:11:50.249552 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Jul 11 05:11:50.249804 systemd-tmpfiles[1219]: ACLs are not supported, ignoring.
Jul 11 05:11:50.253025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 11 05:11:50.263997 kernel: loop3: detected capacity change from 0 to 105936
Jul 11 05:11:50.270002 kernel: loop4: detected capacity change from 0 to 207008
Jul 11 05:11:50.276985 kernel: loop5: detected capacity change from 0 to 134232
Jul 11 05:11:50.283150 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 11 05:11:50.283588 (sd-merge)[1223]: Merged extensions into '/usr'.
Jul 11 05:11:50.287105 systemd[1]: Reload requested from client PID 1197 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 11 05:11:50.287123 systemd[1]: Reloading...
Jul 11 05:11:50.326992 zram_generator::config[1249]: No configuration found.
Jul 11 05:11:50.411422 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 05:11:50.421983 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 11 05:11:50.475453 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 11 05:11:50.475606 systemd[1]: Reloading finished in 188 ms.
Jul 11 05:11:50.508514 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 11 05:11:50.509686 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 11 05:11:50.524080 systemd[1]: Starting ensure-sysext.service...
Jul 11 05:11:50.525589 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 11 05:11:50.540149 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jul 11 05:11:50.540459 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jul 11 05:11:50.540701 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 11 05:11:50.540879 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 11 05:11:50.540909 systemd[1]: Reload requested from client PID 1284 ('systemctl') (unit ensure-sysext.service)...
Jul 11 05:11:50.540918 systemd[1]: Reloading...
Jul 11 05:11:50.541788 systemd-tmpfiles[1285]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 11 05:11:50.542142 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Jul 11 05:11:50.542261 systemd-tmpfiles[1285]: ACLs are not supported, ignoring.
Jul 11 05:11:50.544800 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 05:11:50.544809 systemd-tmpfiles[1285]: Skipping /boot
Jul 11 05:11:50.550500 systemd-tmpfiles[1285]: Detected autofs mount point /boot during canonicalization of boot.
Jul 11 05:11:50.550591 systemd-tmpfiles[1285]: Skipping /boot
Jul 11 05:11:50.582989 zram_generator::config[1312]: No configuration found.
Jul 11 05:11:50.651574 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 11 05:11:50.713182 systemd[1]: Reloading finished in 171 ms.
Jul 11 05:11:50.735483 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 11 05:11:50.736700 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 11 05:11:50.752956 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 11 05:11:50.754860 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 11 05:11:50.756745 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 11 05:11:50.759085 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 11 05:11:50.765569 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 11 05:11:50.768463 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 11 05:11:50.783697 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 11 05:11:50.786010 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 11 05:11:50.790988 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 05:11:50.792405 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 05:11:50.794854 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 05:11:50.805304 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 05:11:50.806109 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 05:11:50.806297 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 05:11:50.807721 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 11 05:11:50.811139 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 05:11:50.811295 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 05:11:50.813382 systemd-udevd[1353]: Using default interface naming scheme 'v255'.
Jul 11 05:11:50.814673 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 05:11:50.814824 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 05:11:50.822328 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 11 05:11:50.824783 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 05:11:50.824939 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 05:11:50.826330 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 11 05:11:50.831536 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 05:11:50.832649 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 05:11:50.838352 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 05:11:50.840752 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 05:11:50.841927 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 05:11:50.842049 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 05:11:50.842147 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 05:11:50.842932 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 11 05:11:50.844774 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 11 05:11:50.846013 augenrules[1390]: No rules
Jul 11 05:11:50.846453 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 11 05:11:50.847845 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 11 05:11:50.848049 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 11 05:11:50.849479 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 05:11:50.849631 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 05:11:50.850897 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 05:11:50.851041 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 05:11:50.852411 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 05:11:50.853140 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 05:11:50.865752 systemd[1]: Finished ensure-sysext.service.
Jul 11 05:11:50.870163 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 11 05:11:50.871044 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 11 05:11:50.872158 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 11 05:11:50.876126 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 11 05:11:50.877679 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 11 05:11:50.891644 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 11 05:11:50.892525 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 11 05:11:50.892566 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 11 05:11:50.893964 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 11 05:11:50.897930 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 11 05:11:50.898749 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 11 05:11:50.899248 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 11 05:11:50.902033 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 11 05:11:50.903203 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 11 05:11:50.903357 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 11 05:11:50.904444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 11 05:11:50.904587 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 11 05:11:50.906321 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 11 05:11:50.906456 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 11 05:11:50.913445 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 11 05:11:50.913500 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 11 05:11:50.925890 augenrules[1430]: /sbin/augenrules: No change
Jul 11 05:11:50.934031 augenrules[1459]: No rules
Jul 11 05:11:50.935485 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 11 05:11:50.935723 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 11 05:11:50.955149 systemd-resolved[1351]: Positive Trust Anchors:
Jul 11 05:11:50.955805 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 11 05:11:50.955907 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 11 05:11:50.961943 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 11 05:11:50.965705 systemd-resolved[1351]: Defaulting to hostname 'linux'.
Jul 11 05:11:50.972111 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 11 05:11:50.974142 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 11 05:11:50.975108 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 11 05:11:50.977203 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 11 05:11:51.006618 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 11 05:11:51.011288 systemd-networkd[1435]: lo: Link UP Jul 11 05:11:51.011296 systemd-networkd[1435]: lo: Gained carrier Jul 11 05:11:51.012028 systemd-networkd[1435]: Enumeration completed Jul 11 05:11:51.012131 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 11 05:11:51.013122 systemd[1]: Reached target network.target - Network. Jul 11 05:11:51.014489 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 05:11:51.014499 systemd-networkd[1435]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 11 05:11:51.015040 systemd-networkd[1435]: eth0: Link UP Jul 11 05:11:51.015171 systemd-networkd[1435]: eth0: Gained carrier Jul 11 05:11:51.015189 systemd-networkd[1435]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 11 05:11:51.015589 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 11 05:11:51.017364 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 11 05:11:51.021170 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 11 05:11:51.022073 systemd[1]: Reached target sysinit.target - System Initialization. Jul 11 05:11:51.023031 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 11 05:11:51.024025 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
Jul 11 05:11:51.024897 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 11 05:11:51.025790 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 11 05:11:51.025817 systemd[1]: Reached target paths.target - Path Units. Jul 11 05:11:51.026557 systemd[1]: Reached target time-set.target - System Time Set. Jul 11 05:11:51.027384 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 11 05:11:51.028234 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 11 05:11:51.029087 systemd[1]: Reached target timers.target - Timer Units. Jul 11 05:11:51.030342 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 11 05:11:51.032073 systemd-networkd[1435]: eth0: DHCPv4 address 10.0.0.147/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 11 05:11:51.032216 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 11 05:11:51.034556 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 11 05:11:51.035631 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 11 05:11:51.036528 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 11 05:11:51.037559 systemd-timesyncd[1436]: Network configuration changed, trying to establish connection. Jul 11 05:11:51.039411 systemd-timesyncd[1436]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 11 05:11:51.039475 systemd-timesyncd[1436]: Initial clock synchronization to Fri 2025-07-11 05:11:51.252745 UTC. Jul 11 05:11:51.040404 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 11 05:11:51.041424 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 11 05:11:51.042920 systemd[1]: Listening on docker.socket - Docker Socket for the API. 
Jul 11 05:11:51.046857 systemd[1]: Reached target sockets.target - Socket Units. Jul 11 05:11:51.047598 systemd[1]: Reached target basic.target - Basic System. Jul 11 05:11:51.049125 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 11 05:11:51.049152 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 11 05:11:51.051172 systemd[1]: Starting containerd.service - containerd container runtime... Jul 11 05:11:51.053910 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 11 05:11:51.058847 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 11 05:11:51.062548 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 11 05:11:51.067174 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 11 05:11:51.067869 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 11 05:11:51.070613 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 11 05:11:51.073158 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 11 05:11:51.077186 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 11 05:11:51.078610 jq[1493]: false Jul 11 05:11:51.079113 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 11 05:11:51.084510 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 11 05:11:51.085848 extend-filesystems[1496]: Found /dev/vda6 Jul 11 05:11:51.086070 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jul 11 05:11:51.086438 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 11 05:11:51.087520 systemd[1]: Starting update-engine.service - Update Engine... Jul 11 05:11:51.090314 extend-filesystems[1496]: Found /dev/vda9 Jul 11 05:11:51.092897 extend-filesystems[1496]: Checking size of /dev/vda9 Jul 11 05:11:51.092899 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 11 05:11:51.094661 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 11 05:11:51.099490 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 11 05:11:51.100592 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 11 05:11:51.100799 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 11 05:11:51.101068 systemd[1]: motdgen.service: Deactivated successfully. Jul 11 05:11:51.101226 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 11 05:11:51.102659 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 11 05:11:51.102800 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 11 05:11:51.104512 extend-filesystems[1496]: Resized partition /dev/vda9 Jul 11 05:11:51.111435 extend-filesystems[1527]: resize2fs 1.47.2 (1-Jan-2025) Jul 11 05:11:51.126053 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 11 05:11:51.129646 jq[1515]: true Jul 11 05:11:51.138519 (ntainerd)[1533]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 11 05:11:51.144588 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 11 05:11:51.147855 dbus-daemon[1491]: [system] SELinux support is enabled Jul 11 05:11:51.148019 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 11 05:11:51.152671 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 11 05:11:51.152730 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 11 05:11:51.153799 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 11 05:11:51.153833 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 11 05:11:51.163887 jq[1537]: true Jul 11 05:11:51.168641 update_engine[1509]: I20250711 05:11:51.166874 1509 main.cc:92] Flatcar Update Engine starting Jul 11 05:11:51.169009 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 11 05:11:51.171036 tar[1521]: linux-arm64/LICENSE Jul 11 05:11:51.179076 update_engine[1509]: I20250711 05:11:51.174436 1509 update_check_scheduler.cc:74] Next update check in 10m35s Jul 11 05:11:51.174372 systemd[1]: Started update-engine.service - Update Engine. Jul 11 05:11:51.178156 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 11 05:11:51.179987 extend-filesystems[1527]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 11 05:11:51.179987 extend-filesystems[1527]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 11 05:11:51.179987 extend-filesystems[1527]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 11 05:11:51.183381 tar[1521]: linux-arm64/helm Jul 11 05:11:51.182231 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Jul 11 05:11:51.183444 extend-filesystems[1496]: Resized filesystem in /dev/vda9 Jul 11 05:11:51.182423 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 11 05:11:51.225565 systemd-logind[1508]: Watching system buttons on /dev/input/event0 (Power Button) Jul 11 05:11:51.225755 systemd-logind[1508]: New seat seat0. Jul 11 05:11:51.226922 systemd[1]: Started systemd-logind.service - User Login Management. Jul 11 05:11:51.254837 bash[1562]: Updated "/home/core/.ssh/authorized_keys" Jul 11 05:11:51.256220 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 11 05:11:51.257741 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 11 05:11:51.266568 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 11 05:11:51.269873 locksmithd[1542]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 11 05:11:51.371944 containerd[1533]: time="2025-07-11T05:11:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 11 05:11:51.373504 containerd[1533]: time="2025-07-11T05:11:51.373475040Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 11 05:11:51.390012 containerd[1533]: time="2025-07-11T05:11:51.388905160Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.64µs" Jul 11 05:11:51.390012 containerd[1533]: time="2025-07-11T05:11:51.388941120Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 11 05:11:51.390012 containerd[1533]: time="2025-07-11T05:11:51.388963560Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 11 05:11:51.390012 
containerd[1533]: time="2025-07-11T05:11:51.389126200Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 11 05:11:51.390012 containerd[1533]: time="2025-07-11T05:11:51.389148800Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 11 05:11:51.390012 containerd[1533]: time="2025-07-11T05:11:51.389174040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 11 05:11:51.390012 containerd[1533]: time="2025-07-11T05:11:51.389222640Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 11 05:11:51.390012 containerd[1533]: time="2025-07-11T05:11:51.389238320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 11 05:11:51.390012 containerd[1533]: time="2025-07-11T05:11:51.389461120Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 11 05:11:51.390012 containerd[1533]: time="2025-07-11T05:11:51.389479560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 11 05:11:51.390012 containerd[1533]: time="2025-07-11T05:11:51.389491440Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 11 05:11:51.390012 containerd[1533]: time="2025-07-11T05:11:51.389502920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 11 05:11:51.390251 containerd[1533]: time="2025-07-11T05:11:51.389570720Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 11 05:11:51.390251 containerd[1533]: time="2025-07-11T05:11:51.389741400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 11 05:11:51.390251 containerd[1533]: time="2025-07-11T05:11:51.389771040Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 11 05:11:51.390251 containerd[1533]: time="2025-07-11T05:11:51.389783840Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 11 05:11:51.390251 containerd[1533]: time="2025-07-11T05:11:51.389808400Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 11 05:11:51.390949 containerd[1533]: time="2025-07-11T05:11:51.390915080Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 11 05:11:51.391072 containerd[1533]: time="2025-07-11T05:11:51.391022920Z" level=info msg="metadata content store policy set" policy=shared Jul 11 05:11:51.393943 containerd[1533]: time="2025-07-11T05:11:51.393909080Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 11 05:11:51.393943 containerd[1533]: time="2025-07-11T05:11:51.393958240Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 11 05:11:51.393943 containerd[1533]: time="2025-07-11T05:11:51.393986120Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 11 05:11:51.393943 containerd[1533]: time="2025-07-11T05:11:51.393999560Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 11 05:11:51.394125 containerd[1533]: 
time="2025-07-11T05:11:51.394011800Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 11 05:11:51.394125 containerd[1533]: time="2025-07-11T05:11:51.394023200Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 11 05:11:51.394125 containerd[1533]: time="2025-07-11T05:11:51.394034960Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 11 05:11:51.394125 containerd[1533]: time="2025-07-11T05:11:51.394047040Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 11 05:11:51.394125 containerd[1533]: time="2025-07-11T05:11:51.394057440Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 11 05:11:51.394125 containerd[1533]: time="2025-07-11T05:11:51.394068000Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 11 05:11:51.394125 containerd[1533]: time="2025-07-11T05:11:51.394076840Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 11 05:11:51.394125 containerd[1533]: time="2025-07-11T05:11:51.394094680Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 11 05:11:51.394248 containerd[1533]: time="2025-07-11T05:11:51.394194800Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 11 05:11:51.394248 containerd[1533]: time="2025-07-11T05:11:51.394213520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 11 05:11:51.394248 containerd[1533]: time="2025-07-11T05:11:51.394227760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 11 05:11:51.394248 containerd[1533]: 
time="2025-07-11T05:11:51.394237400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 11 05:11:51.394310 containerd[1533]: time="2025-07-11T05:11:51.394252120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 11 05:11:51.394310 containerd[1533]: time="2025-07-11T05:11:51.394262920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 11 05:11:51.394310 containerd[1533]: time="2025-07-11T05:11:51.394273800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 11 05:11:51.394310 containerd[1533]: time="2025-07-11T05:11:51.394283000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 11 05:11:51.394310 containerd[1533]: time="2025-07-11T05:11:51.394293080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 11 05:11:51.394310 containerd[1533]: time="2025-07-11T05:11:51.394302520Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 11 05:11:51.394406 containerd[1533]: time="2025-07-11T05:11:51.394311880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 11 05:11:51.394766 containerd[1533]: time="2025-07-11T05:11:51.394482320Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 11 05:11:51.394766 containerd[1533]: time="2025-07-11T05:11:51.394505280Z" level=info msg="Start snapshots syncer" Jul 11 05:11:51.394766 containerd[1533]: time="2025-07-11T05:11:51.394524440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 11 05:11:51.395055 containerd[1533]: time="2025-07-11T05:11:51.394726280Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 11 05:11:51.395055 containerd[1533]: time="2025-07-11T05:11:51.394795000Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 11 05:11:51.395540 containerd[1533]: time="2025-07-11T05:11:51.395508120Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 11 05:11:51.395710 containerd[1533]: time="2025-07-11T05:11:51.395625600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 11 05:11:51.395710 containerd[1533]: time="2025-07-11T05:11:51.395654080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 11 05:11:51.395710 containerd[1533]: time="2025-07-11T05:11:51.395672880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 11 05:11:51.395710 containerd[1533]: time="2025-07-11T05:11:51.395684440Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 11 05:11:51.395710 containerd[1533]: time="2025-07-11T05:11:51.395695560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 11 05:11:51.395710 containerd[1533]: time="2025-07-11T05:11:51.395705120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 11 05:11:51.395710 containerd[1533]: time="2025-07-11T05:11:51.395715080Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 11 05:11:51.395847 containerd[1533]: time="2025-07-11T05:11:51.395740560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 11 05:11:51.395847 containerd[1533]: time="2025-07-11T05:11:51.395751520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 11 05:11:51.395847 containerd[1533]: time="2025-07-11T05:11:51.395761000Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 11 05:11:51.395847 containerd[1533]: time="2025-07-11T05:11:51.395792480Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 11 05:11:51.395847 containerd[1533]: time="2025-07-11T05:11:51.395804440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 11 05:11:51.395847 containerd[1533]: time="2025-07-11T05:11:51.395812840Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 11 05:11:51.395847 containerd[1533]: time="2025-07-11T05:11:51.395821520Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 11 05:11:51.395847 containerd[1533]: time="2025-07-11T05:11:51.395828560Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 11 05:11:51.395847 containerd[1533]: time="2025-07-11T05:11:51.395840080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 11 05:11:51.395847 containerd[1533]: time="2025-07-11T05:11:51.395850080Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 11 05:11:51.396028 containerd[1533]: time="2025-07-11T05:11:51.395922560Z" level=info msg="runtime interface created" Jul 11 05:11:51.396028 containerd[1533]: time="2025-07-11T05:11:51.395927360Z" level=info msg="created NRI interface" Jul 11 05:11:51.396028 containerd[1533]: time="2025-07-11T05:11:51.395935560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 11 05:11:51.396028 containerd[1533]: time="2025-07-11T05:11:51.395946040Z" level=info msg="Connect containerd service" Jul 11 05:11:51.396028 containerd[1533]: time="2025-07-11T05:11:51.395992000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 11 05:11:51.396719 
containerd[1533]: time="2025-07-11T05:11:51.396679240Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 11 05:11:51.476290 tar[1521]: linux-arm64/README.md Jul 11 05:11:51.492050 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 11 05:11:51.497568 containerd[1533]: time="2025-07-11T05:11:51.497510720Z" level=info msg="Start subscribing containerd event" Jul 11 05:11:51.497656 containerd[1533]: time="2025-07-11T05:11:51.497590000Z" level=info msg="Start recovering state" Jul 11 05:11:51.497656 containerd[1533]: time="2025-07-11T05:11:51.497664240Z" level=info msg="Start event monitor" Jul 11 05:11:51.497656 containerd[1533]: time="2025-07-11T05:11:51.497676120Z" level=info msg="Start cni network conf syncer for default" Jul 11 05:11:51.497856 containerd[1533]: time="2025-07-11T05:11:51.497685480Z" level=info msg="Start streaming server" Jul 11 05:11:51.497856 containerd[1533]: time="2025-07-11T05:11:51.497695760Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 11 05:11:51.497856 containerd[1533]: time="2025-07-11T05:11:51.497703000Z" level=info msg="runtime interface starting up..." Jul 11 05:11:51.497856 containerd[1533]: time="2025-07-11T05:11:51.497708400Z" level=info msg="starting plugins..." Jul 11 05:11:51.497856 containerd[1533]: time="2025-07-11T05:11:51.497719400Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 11 05:11:51.497856 containerd[1533]: time="2025-07-11T05:11:51.497786480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 11 05:11:51.497856 containerd[1533]: time="2025-07-11T05:11:51.497832240Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Jul 11 05:11:51.497989 containerd[1533]: time="2025-07-11T05:11:51.497877040Z" level=info msg="containerd successfully booted in 0.126265s" Jul 11 05:11:51.498063 systemd[1]: Started containerd.service - containerd container runtime. Jul 11 05:11:51.971115 sshd_keygen[1531]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 11 05:11:51.989541 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 11 05:11:51.993072 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 11 05:11:52.008203 systemd[1]: issuegen.service: Deactivated successfully. Jul 11 05:11:52.008439 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 11 05:11:52.010579 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 11 05:11:52.035069 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 11 05:11:52.039361 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 11 05:11:52.041139 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 11 05:11:52.042136 systemd[1]: Reached target getty.target - Login Prompts. Jul 11 05:11:52.800994 systemd-networkd[1435]: eth0: Gained IPv6LL Jul 11 05:11:52.803759 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 11 05:11:52.805197 systemd[1]: Reached target network-online.target - Network is Online. Jul 11 05:11:52.807200 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 11 05:11:52.809528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:11:52.811382 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 11 05:11:52.836948 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 11 05:11:52.837211 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Jul 11 05:11:52.839403 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 11 05:11:52.844417 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 11 05:11:53.382927 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:11:53.384215 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 11 05:11:53.385134 systemd[1]: Startup finished in 2.036s (kernel) + 4.920s (initrd) + 3.899s (userspace) = 10.856s. Jul 11 05:11:53.387103 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 05:11:53.796434 kubelet[1635]: E0711 05:11:53.796307 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 05:11:53.798529 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 05:11:53.798665 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 05:11:53.798959 systemd[1]: kubelet.service: Consumed 783ms CPU time, 255M memory peak. Jul 11 05:11:57.048215 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 11 05:11:57.049166 systemd[1]: Started sshd@0-10.0.0.147:22-10.0.0.1:35350.service - OpenSSH per-connection server daemon (10.0.0.1:35350). Jul 11 05:11:57.116791 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 35350 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:11:57.118800 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:11:57.125064 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Jul 11 05:11:57.125899 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 11 05:11:57.132154 systemd-logind[1508]: New session 1 of user core. Jul 11 05:11:57.148054 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 11 05:11:57.150410 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 11 05:11:57.172772 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 11 05:11:57.174690 systemd-logind[1508]: New session c1 of user core. Jul 11 05:11:57.288204 systemd[1653]: Queued start job for default target default.target. Jul 11 05:11:57.299968 systemd[1653]: Created slice app.slice - User Application Slice. Jul 11 05:11:57.300019 systemd[1653]: Reached target paths.target - Paths. Jul 11 05:11:57.300054 systemd[1653]: Reached target timers.target - Timers. Jul 11 05:11:57.301138 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 11 05:11:57.309763 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 11 05:11:57.309824 systemd[1653]: Reached target sockets.target - Sockets. Jul 11 05:11:57.309858 systemd[1653]: Reached target basic.target - Basic System. Jul 11 05:11:57.309893 systemd[1653]: Reached target default.target - Main User Target. Jul 11 05:11:57.309916 systemd[1653]: Startup finished in 130ms. Jul 11 05:11:57.310108 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 11 05:11:57.311359 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 11 05:11:57.381530 systemd[1]: Started sshd@1-10.0.0.147:22-10.0.0.1:35356.service - OpenSSH per-connection server daemon (10.0.0.1:35356). 
Jul 11 05:11:57.426789 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 35356 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:11:57.427977 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:11:57.432033 systemd-logind[1508]: New session 2 of user core. Jul 11 05:11:57.440125 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 11 05:11:57.492081 sshd[1667]: Connection closed by 10.0.0.1 port 35356 Jul 11 05:11:57.492483 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Jul 11 05:11:57.503845 systemd[1]: sshd@1-10.0.0.147:22-10.0.0.1:35356.service: Deactivated successfully. Jul 11 05:11:57.505297 systemd[1]: session-2.scope: Deactivated successfully. Jul 11 05:11:57.507573 systemd-logind[1508]: Session 2 logged out. Waiting for processes to exit. Jul 11 05:11:57.509011 systemd[1]: Started sshd@2-10.0.0.147:22-10.0.0.1:35362.service - OpenSSH per-connection server daemon (10.0.0.1:35362). Jul 11 05:11:57.510029 systemd-logind[1508]: Removed session 2. Jul 11 05:11:57.565405 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 35362 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:11:57.566388 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:11:57.569969 systemd-logind[1508]: New session 3 of user core. Jul 11 05:11:57.587140 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 11 05:11:57.634705 sshd[1676]: Connection closed by 10.0.0.1 port 35362 Jul 11 05:11:57.634985 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Jul 11 05:11:57.646142 systemd[1]: sshd@2-10.0.0.147:22-10.0.0.1:35362.service: Deactivated successfully. Jul 11 05:11:57.647445 systemd[1]: session-3.scope: Deactivated successfully. Jul 11 05:11:57.648135 systemd-logind[1508]: Session 3 logged out. Waiting for processes to exit. 
Jul 11 05:11:57.649851 systemd[1]: Started sshd@3-10.0.0.147:22-10.0.0.1:35366.service - OpenSSH per-connection server daemon (10.0.0.1:35366). Jul 11 05:11:57.650841 systemd-logind[1508]: Removed session 3. Jul 11 05:11:57.708031 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 35366 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:11:57.708997 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:11:57.713035 systemd-logind[1508]: New session 4 of user core. Jul 11 05:11:57.724122 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 11 05:11:57.775603 sshd[1685]: Connection closed by 10.0.0.1 port 35366 Jul 11 05:11:57.775841 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Jul 11 05:11:57.786740 systemd[1]: sshd@3-10.0.0.147:22-10.0.0.1:35366.service: Deactivated successfully. Jul 11 05:11:57.790154 systemd[1]: session-4.scope: Deactivated successfully. Jul 11 05:11:57.790860 systemd-logind[1508]: Session 4 logged out. Waiting for processes to exit. Jul 11 05:11:57.792959 systemd[1]: Started sshd@4-10.0.0.147:22-10.0.0.1:35372.service - OpenSSH per-connection server daemon (10.0.0.1:35372). Jul 11 05:11:57.793636 systemd-logind[1508]: Removed session 4. Jul 11 05:11:57.837166 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 35372 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:11:57.838452 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:11:57.842048 systemd-logind[1508]: New session 5 of user core. Jul 11 05:11:57.856176 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 11 05:11:57.914874 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 11 05:11:57.915179 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:11:57.927771 sudo[1695]: pam_unix(sudo:session): session closed for user root Jul 11 05:11:57.930281 sshd[1694]: Connection closed by 10.0.0.1 port 35372 Jul 11 05:11:57.929518 sshd-session[1691]: pam_unix(sshd:session): session closed for user core Jul 11 05:11:57.938921 systemd[1]: sshd@4-10.0.0.147:22-10.0.0.1:35372.service: Deactivated successfully. Jul 11 05:11:57.940278 systemd[1]: session-5.scope: Deactivated successfully. Jul 11 05:11:57.942099 systemd-logind[1508]: Session 5 logged out. Waiting for processes to exit. Jul 11 05:11:57.944099 systemd[1]: Started sshd@5-10.0.0.147:22-10.0.0.1:35380.service - OpenSSH per-connection server daemon (10.0.0.1:35380). Jul 11 05:11:57.945229 systemd-logind[1508]: Removed session 5. Jul 11 05:11:57.997886 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 35380 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:11:57.998965 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:11:58.003039 systemd-logind[1508]: New session 6 of user core. Jul 11 05:11:58.011133 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 11 05:11:58.061225 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 11 05:11:58.061737 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:11:58.135762 sudo[1706]: pam_unix(sudo:session): session closed for user root Jul 11 05:11:58.141241 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 11 05:11:58.141493 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:11:58.149385 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 11 05:11:58.178311 augenrules[1728]: No rules Jul 11 05:11:58.179100 systemd[1]: audit-rules.service: Deactivated successfully. Jul 11 05:11:58.179357 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 11 05:11:58.180719 sudo[1705]: pam_unix(sudo:session): session closed for user root Jul 11 05:11:58.182040 sshd[1704]: Connection closed by 10.0.0.1 port 35380 Jul 11 05:11:58.182315 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Jul 11 05:11:58.189669 systemd[1]: sshd@5-10.0.0.147:22-10.0.0.1:35380.service: Deactivated successfully. Jul 11 05:11:58.192180 systemd[1]: session-6.scope: Deactivated successfully. Jul 11 05:11:58.192850 systemd-logind[1508]: Session 6 logged out. Waiting for processes to exit. Jul 11 05:11:58.194997 systemd[1]: Started sshd@6-10.0.0.147:22-10.0.0.1:35396.service - OpenSSH per-connection server daemon (10.0.0.1:35396). Jul 11 05:11:58.195482 systemd-logind[1508]: Removed session 6. Jul 11 05:11:58.246250 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 35396 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:11:58.247370 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:11:58.251075 systemd-logind[1508]: New session 7 of user core. 
Jul 11 05:11:58.261158 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 11 05:11:58.311070 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 11 05:11:58.311334 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 11 05:11:58.642297 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 11 05:11:58.655331 (dockerd)[1761]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 11 05:11:58.904100 dockerd[1761]: time="2025-07-11T05:11:58.903903184Z" level=info msg="Starting up" Jul 11 05:11:58.905202 dockerd[1761]: time="2025-07-11T05:11:58.905100711Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 11 05:11:58.915777 dockerd[1761]: time="2025-07-11T05:11:58.915748192Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 11 05:11:58.950060 dockerd[1761]: time="2025-07-11T05:11:58.949863215Z" level=info msg="Loading containers: start." Jul 11 05:11:58.958032 kernel: Initializing XFRM netlink socket Jul 11 05:11:59.157336 systemd-networkd[1435]: docker0: Link UP Jul 11 05:11:59.160420 dockerd[1761]: time="2025-07-11T05:11:59.160383856Z" level=info msg="Loading containers: done." 
Jul 11 05:11:59.173394 dockerd[1761]: time="2025-07-11T05:11:59.173345343Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 11 05:11:59.173510 dockerd[1761]: time="2025-07-11T05:11:59.173414136Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 11 05:11:59.173510 dockerd[1761]: time="2025-07-11T05:11:59.173490407Z" level=info msg="Initializing buildkit" Jul 11 05:11:59.194154 dockerd[1761]: time="2025-07-11T05:11:59.194124230Z" level=info msg="Completed buildkit initialization" Jul 11 05:11:59.198823 dockerd[1761]: time="2025-07-11T05:11:59.198790363Z" level=info msg="Daemon has completed initialization" Jul 11 05:11:59.199002 dockerd[1761]: time="2025-07-11T05:11:59.198876859Z" level=info msg="API listen on /run/docker.sock" Jul 11 05:11:59.199003 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 11 05:11:59.785654 containerd[1533]: time="2025-07-11T05:11:59.785600860Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 11 05:12:00.446332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount408772791.mount: Deactivated successfully. 
Jul 11 05:12:01.640007 containerd[1533]: time="2025-07-11T05:12:01.639171232Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:01.640691 containerd[1533]: time="2025-07-11T05:12:01.640665077Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196" Jul 11 05:12:01.641495 containerd[1533]: time="2025-07-11T05:12:01.641468590Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:01.644529 containerd[1533]: time="2025-07-11T05:12:01.644499183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:01.645215 containerd[1533]: time="2025-07-11T05:12:01.645186572Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.859543454s" Jul 11 05:12:01.645325 containerd[1533]: time="2025-07-11T05:12:01.645309066Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\"" Jul 11 05:12:01.646101 containerd[1533]: time="2025-07-11T05:12:01.645966779Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 11 05:12:03.277752 containerd[1533]: time="2025-07-11T05:12:03.277699188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:03.278369 containerd[1533]: time="2025-07-11T05:12:03.278325369Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230" Jul 11 05:12:03.278893 containerd[1533]: time="2025-07-11T05:12:03.278866592Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:03.281579 containerd[1533]: time="2025-07-11T05:12:03.281540350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:03.282864 containerd[1533]: time="2025-07-11T05:12:03.282825674Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 1.636638111s" Jul 11 05:12:03.282900 containerd[1533]: time="2025-07-11T05:12:03.282861130Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\"" Jul 11 05:12:03.283541 containerd[1533]: time="2025-07-11T05:12:03.283440466Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 11 05:12:04.049036 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 11 05:12:04.050669 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:12:04.172679 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 11 05:12:04.175948 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 05:12:04.212362 kubelet[2044]: E0711 05:12:04.212301 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 05:12:04.215198 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 05:12:04.215339 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 05:12:04.215671 systemd[1]: kubelet.service: Consumed 147ms CPU time, 107.5M memory peak. Jul 11 05:12:04.757370 containerd[1533]: time="2025-07-11T05:12:04.757323918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:04.758172 containerd[1533]: time="2025-07-11T05:12:04.758095643Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143" Jul 11 05:12:04.758907 containerd[1533]: time="2025-07-11T05:12:04.758847862Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:04.762106 containerd[1533]: time="2025-07-11T05:12:04.762047039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:04.762722 containerd[1533]: time="2025-07-11T05:12:04.762571120Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id 
\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.479083736s" Jul 11 05:12:04.762722 containerd[1533]: time="2025-07-11T05:12:04.762599351Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\"" Jul 11 05:12:04.764857 containerd[1533]: time="2025-07-11T05:12:04.764628475Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 11 05:12:05.690197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1124398539.mount: Deactivated successfully. Jul 11 05:12:05.912705 containerd[1533]: time="2025-07-11T05:12:05.912636260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:05.913879 containerd[1533]: time="2025-07-11T05:12:05.913836798Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408" Jul 11 05:12:05.914535 containerd[1533]: time="2025-07-11T05:12:05.914501669Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:05.916361 containerd[1533]: time="2025-07-11T05:12:05.916329221Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:05.916795 containerd[1533]: time="2025-07-11T05:12:05.916757384Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag 
\"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.152092721s" Jul 11 05:12:05.916832 containerd[1533]: time="2025-07-11T05:12:05.916795081Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\"" Jul 11 05:12:05.917386 containerd[1533]: time="2025-07-11T05:12:05.917200417Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 11 05:12:06.559430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount708459041.mount: Deactivated successfully. Jul 11 05:12:07.535123 containerd[1533]: time="2025-07-11T05:12:07.534984401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:07.535816 containerd[1533]: time="2025-07-11T05:12:07.535736378Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 11 05:12:07.536990 containerd[1533]: time="2025-07-11T05:12:07.536412603Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:07.540574 containerd[1533]: time="2025-07-11T05:12:07.540519854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:07.541720 containerd[1533]: time="2025-07-11T05:12:07.541681821Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.624452317s" Jul 11 05:12:07.541720 containerd[1533]: time="2025-07-11T05:12:07.541717268Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 11 05:12:07.542242 containerd[1533]: time="2025-07-11T05:12:07.542183581Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 11 05:12:08.000472 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount47134999.mount: Deactivated successfully. Jul 11 05:12:08.004940 containerd[1533]: time="2025-07-11T05:12:08.004893810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 05:12:08.006022 containerd[1533]: time="2025-07-11T05:12:08.005988286Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 11 05:12:08.006967 containerd[1533]: time="2025-07-11T05:12:08.006919169Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 05:12:08.008795 containerd[1533]: time="2025-07-11T05:12:08.008741089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 11 05:12:08.009451 containerd[1533]: time="2025-07-11T05:12:08.009317659Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 467.101686ms" Jul 11 05:12:08.009451 containerd[1533]: time="2025-07-11T05:12:08.009348877Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 11 05:12:08.009890 containerd[1533]: time="2025-07-11T05:12:08.009847924Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 11 05:12:08.607869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1323257895.mount: Deactivated successfully. Jul 11 05:12:10.948539 containerd[1533]: time="2025-07-11T05:12:10.948467025Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:10.949712 containerd[1533]: time="2025-07-11T05:12:10.949657249Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471" Jul 11 05:12:10.950677 containerd[1533]: time="2025-07-11T05:12:10.950630029Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:10.953546 containerd[1533]: time="2025-07-11T05:12:10.953512081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:10.954775 containerd[1533]: time="2025-07-11T05:12:10.954616658Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size 
\"67941650\" in 2.944739288s" Jul 11 05:12:10.954775 containerd[1533]: time="2025-07-11T05:12:10.954656955Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jul 11 05:12:14.465885 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 11 05:12:14.467272 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:12:14.590269 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:12:14.605254 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 11 05:12:14.640137 kubelet[2206]: E0711 05:12:14.640039 2206 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 11 05:12:14.642452 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 11 05:12:14.642574 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 11 05:12:14.643089 systemd[1]: kubelet.service: Consumed 135ms CPU time, 106.9M memory peak. Jul 11 05:12:16.443455 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:12:16.443598 systemd[1]: kubelet.service: Consumed 135ms CPU time, 106.9M memory peak. Jul 11 05:12:16.445355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:12:16.463461 systemd[1]: Reload requested from client PID 2220 ('systemctl') (unit session-7.scope)... Jul 11 05:12:16.463484 systemd[1]: Reloading... Jul 11 05:12:16.546001 zram_generator::config[2267]: No configuration found. 
Jul 11 05:12:16.641921 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 05:12:16.725106 systemd[1]: Reloading finished in 261 ms. Jul 11 05:12:16.774351 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 11 05:12:16.774548 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 11 05:12:16.774916 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:12:16.774957 systemd[1]: kubelet.service: Consumed 85ms CPU time, 95.1M memory peak. Jul 11 05:12:16.777250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:12:16.881608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:12:16.885390 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 05:12:16.919620 kubelet[2309]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 05:12:16.919620 kubelet[2309]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 11 05:12:16.919620 kubelet[2309]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 11 05:12:16.919917 kubelet[2309]: I0711 05:12:16.919685 2309 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 11 05:12:17.942145 kubelet[2309]: I0711 05:12:17.942093 2309 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 11 05:12:17.942145 kubelet[2309]: I0711 05:12:17.942130 2309 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 11 05:12:17.942476 kubelet[2309]: I0711 05:12:17.942399 2309 server.go:954] "Client rotation is on, will bootstrap in background" Jul 11 05:12:17.983353 kubelet[2309]: E0711 05:12:17.983308 2309 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.147:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:12:17.985189 kubelet[2309]: I0711 05:12:17.985116 2309 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 11 05:12:17.991578 kubelet[2309]: I0711 05:12:17.991561 2309 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 11 05:12:17.994341 kubelet[2309]: I0711 05:12:17.994322 2309 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 11 05:12:17.994636 kubelet[2309]: I0711 05:12:17.994608 2309 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 11 05:12:17.994846 kubelet[2309]: I0711 05:12:17.994693 2309 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 11 05:12:17.995064 kubelet[2309]: I0711 05:12:17.995049 2309 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 11 05:12:17.995131 kubelet[2309]: I0711 05:12:17.995122 2309 container_manager_linux.go:304] "Creating device plugin manager" Jul 11 05:12:17.995357 kubelet[2309]: I0711 05:12:17.995342 2309 state_mem.go:36] "Initialized new in-memory state store" Jul 11 05:12:18.003607 kubelet[2309]: I0711 05:12:18.003587 2309 kubelet.go:446] "Attempting to sync node with API server" Jul 11 05:12:18.003688 kubelet[2309]: I0711 05:12:18.003678 2309 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 11 05:12:18.004664 kubelet[2309]: W0711 05:12:18.004620 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jul 11 05:12:18.004722 kubelet[2309]: E0711 05:12:18.004684 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.147:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:12:18.004835 kubelet[2309]: I0711 05:12:18.004818 2309 kubelet.go:352] "Adding apiserver pod source" Jul 11 05:12:18.004898 kubelet[2309]: I0711 05:12:18.004889 2309 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 11 05:12:18.007663 kubelet[2309]: W0711 05:12:18.007580 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jul 11 05:12:18.007663 kubelet[2309]: E0711 05:12:18.007626 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: 
Get \"https://10.0.0.147:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:12:18.017018 kubelet[2309]: I0711 05:12:18.016990 2309 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 11 05:12:18.018632 kubelet[2309]: I0711 05:12:18.018609 2309 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 11 05:12:18.018861 kubelet[2309]: W0711 05:12:18.018849 2309 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 11 05:12:18.020001 kubelet[2309]: I0711 05:12:18.019960 2309 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 11 05:12:18.020574 kubelet[2309]: I0711 05:12:18.020536 2309 server.go:1287] "Started kubelet" Jul 11 05:12:18.021462 kubelet[2309]: I0711 05:12:18.021409 2309 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 11 05:12:18.022186 kubelet[2309]: I0711 05:12:18.022163 2309 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 11 05:12:18.022319 kubelet[2309]: I0711 05:12:18.022267 2309 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 11 05:12:18.022536 kubelet[2309]: I0711 05:12:18.022511 2309 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 11 05:12:18.022789 kubelet[2309]: I0711 05:12:18.022762 2309 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 11 05:12:18.023883 kubelet[2309]: E0711 05:12:18.023345 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:12:18.023883 kubelet[2309]: I0711 
05:12:18.023403 2309 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 11 05:12:18.023883 kubelet[2309]: I0711 05:12:18.023574 2309 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 11 05:12:18.023883 kubelet[2309]: I0711 05:12:18.023636 2309 reconciler.go:26] "Reconciler: start to sync state" Jul 11 05:12:18.024036 kubelet[2309]: W0711 05:12:18.023935 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jul 11 05:12:18.024036 kubelet[2309]: E0711 05:12:18.023991 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:12:18.024599 kubelet[2309]: I0711 05:12:18.022230 2309 server.go:479] "Adding debug handlers to kubelet server" Jul 11 05:12:18.025769 kubelet[2309]: I0711 05:12:18.025728 2309 factory.go:221] Registration of the systemd container factory successfully Jul 11 05:12:18.025832 kubelet[2309]: I0711 05:12:18.025821 2309 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 11 05:12:18.026801 kubelet[2309]: E0711 05:12:18.026749 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="200ms" Jul 11 05:12:18.030709 kubelet[2309]: E0711 05:12:18.030208 2309 event.go:368] "Unable to write event (may retry after sleeping)" 
err="Post \"https://10.0.0.147:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.147:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18511a61f3566247 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-11 05:12:18.020508231 +0000 UTC m=+1.132285712,LastTimestamp:2025-07-11 05:12:18.020508231 +0000 UTC m=+1.132285712,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 11 05:12:18.036015 kubelet[2309]: I0711 05:12:18.035991 2309 factory.go:221] Registration of the containerd container factory successfully Jul 11 05:12:18.037917 kubelet[2309]: I0711 05:12:18.037882 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 11 05:12:18.038959 kubelet[2309]: I0711 05:12:18.038939 2309 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 11 05:12:18.038959 kubelet[2309]: I0711 05:12:18.039093 2309 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 11 05:12:18.038959 kubelet[2309]: I0711 05:12:18.039119 2309 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 11 05:12:18.038959 kubelet[2309]: I0711 05:12:18.039127 2309 kubelet.go:2382] "Starting kubelet main sync loop" Jul 11 05:12:18.038959 kubelet[2309]: E0711 05:12:18.039165 2309 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 11 05:12:18.044594 kubelet[2309]: W0711 05:12:18.044569 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jul 11 05:12:18.044665 kubelet[2309]: E0711 05:12:18.044605 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.147:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:12:18.044981 kubelet[2309]: I0711 05:12:18.044945 2309 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 11 05:12:18.044981 kubelet[2309]: I0711 05:12:18.044961 2309 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 11 05:12:18.045052 kubelet[2309]: I0711 05:12:18.044990 2309 state_mem.go:36] "Initialized new in-memory state store" Jul 11 05:12:18.124050 kubelet[2309]: E0711 05:12:18.124001 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:12:18.140258 kubelet[2309]: E0711 05:12:18.140216 2309 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 11 05:12:18.224550 kubelet[2309]: E0711 05:12:18.224458 2309 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 11 05:12:18.228185 kubelet[2309]: E0711 05:12:18.228147 2309 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="400ms" Jul 11 05:12:18.232057 kubelet[2309]: I0711 05:12:18.232031 2309 policy_none.go:49] "None policy: Start" Jul 11 05:12:18.232057 kubelet[2309]: I0711 05:12:18.232059 2309 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 11 05:12:18.232120 kubelet[2309]: I0711 05:12:18.232072 2309 state_mem.go:35] "Initializing new in-memory state store" Jul 11 05:12:18.237290 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 11 05:12:18.249278 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 11 05:12:18.252577 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 11 05:12:18.269690 kubelet[2309]: I0711 05:12:18.269668 2309 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 11 05:12:18.269857 kubelet[2309]: I0711 05:12:18.269842 2309 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 11 05:12:18.270002 kubelet[2309]: I0711 05:12:18.269859 2309 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 11 05:12:18.270409 kubelet[2309]: I0711 05:12:18.270386 2309 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 11 05:12:18.271159 kubelet[2309]: E0711 05:12:18.271139 2309 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 11 05:12:18.271210 kubelet[2309]: E0711 05:12:18.271180 2309 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 11 05:12:18.354214 systemd[1]: Created slice kubepods-burstable-pod4e7a440c80758e2abceec0d6f2e37b60.slice - libcontainer container kubepods-burstable-pod4e7a440c80758e2abceec0d6f2e37b60.slice. Jul 11 05:12:18.370807 kubelet[2309]: I0711 05:12:18.370779 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 05:12:18.373046 kubelet[2309]: E0711 05:12:18.373016 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Jul 11 05:12:18.379447 kubelet[2309]: E0711 05:12:18.378570 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:12:18.380555 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 11 05:12:18.382359 kubelet[2309]: E0711 05:12:18.382324 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:12:18.384562 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
Jul 11 05:12:18.385948 kubelet[2309]: E0711 05:12:18.385922 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:12:18.425823 kubelet[2309]: I0711 05:12:18.425775 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:12:18.425823 kubelet[2309]: I0711 05:12:18.425820 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:12:18.425989 kubelet[2309]: I0711 05:12:18.425843 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 11 05:12:18.425989 kubelet[2309]: I0711 05:12:18.425868 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e7a440c80758e2abceec0d6f2e37b60-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e7a440c80758e2abceec0d6f2e37b60\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:12:18.425989 kubelet[2309]: I0711 05:12:18.425886 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/4e7a440c80758e2abceec0d6f2e37b60-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e7a440c80758e2abceec0d6f2e37b60\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:12:18.425989 kubelet[2309]: I0711 05:12:18.425901 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e7a440c80758e2abceec0d6f2e37b60-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4e7a440c80758e2abceec0d6f2e37b60\") " pod="kube-system/kube-apiserver-localhost" Jul 11 05:12:18.425989 kubelet[2309]: I0711 05:12:18.425915 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:12:18.426091 kubelet[2309]: I0711 05:12:18.425932 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:12:18.426091 kubelet[2309]: I0711 05:12:18.425956 2309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 11 05:12:18.575481 kubelet[2309]: I0711 05:12:18.574816 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 05:12:18.575481 kubelet[2309]: E0711 
05:12:18.575103 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Jul 11 05:12:18.628894 kubelet[2309]: E0711 05:12:18.628852 2309 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.147:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.147:6443: connect: connection refused" interval="800ms" Jul 11 05:12:18.681592 containerd[1533]: time="2025-07-11T05:12:18.681546118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4e7a440c80758e2abceec0d6f2e37b60,Namespace:kube-system,Attempt:0,}" Jul 11 05:12:18.682954 containerd[1533]: time="2025-07-11T05:12:18.682925500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 11 05:12:18.687986 containerd[1533]: time="2025-07-11T05:12:18.687886407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 11 05:12:18.711089 containerd[1533]: time="2025-07-11T05:12:18.711002264Z" level=info msg="connecting to shim 8103cbb286d3532711934369055117785271a6caa720c7fa9b6c74ae9f0e9ffb" address="unix:///run/containerd/s/437ed45419721206e13527f28b9b68b18f3003551b08dd675d60ccb5b9b70650" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:12:18.711743 containerd[1533]: time="2025-07-11T05:12:18.711704005Z" level=info msg="connecting to shim b1596696bc605d1385ad4784297d0515cad041fa979380d1a6230661a8ae947e" address="unix:///run/containerd/s/45b4ef9bc17640e034b70fd40a75a2bb5a6170f40e1ab6ac4aa8e348e60cf9b4" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:12:18.718865 containerd[1533]: time="2025-07-11T05:12:18.717643722Z" level=info 
msg="connecting to shim 7682547fc8b0b026e04b770bdcc00209ab5ec05ae021342f6ada870598cec29d" address="unix:///run/containerd/s/88985020646b8160ad745f9a3ed6e0b74abd1f092eaa7496adbd994e7cc4ace5" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:12:18.745124 systemd[1]: Started cri-containerd-8103cbb286d3532711934369055117785271a6caa720c7fa9b6c74ae9f0e9ffb.scope - libcontainer container 8103cbb286d3532711934369055117785271a6caa720c7fa9b6c74ae9f0e9ffb. Jul 11 05:12:18.746107 systemd[1]: Started cri-containerd-b1596696bc605d1385ad4784297d0515cad041fa979380d1a6230661a8ae947e.scope - libcontainer container b1596696bc605d1385ad4784297d0515cad041fa979380d1a6230661a8ae947e. Jul 11 05:12:18.749960 systemd[1]: Started cri-containerd-7682547fc8b0b026e04b770bdcc00209ab5ec05ae021342f6ada870598cec29d.scope - libcontainer container 7682547fc8b0b026e04b770bdcc00209ab5ec05ae021342f6ada870598cec29d. Jul 11 05:12:18.782890 containerd[1533]: time="2025-07-11T05:12:18.782848863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:4e7a440c80758e2abceec0d6f2e37b60,Namespace:kube-system,Attempt:0,} returns sandbox id \"8103cbb286d3532711934369055117785271a6caa720c7fa9b6c74ae9f0e9ffb\"" Jul 11 05:12:18.785688 containerd[1533]: time="2025-07-11T05:12:18.785648741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1596696bc605d1385ad4784297d0515cad041fa979380d1a6230661a8ae947e\"" Jul 11 05:12:18.788575 containerd[1533]: time="2025-07-11T05:12:18.788543097Z" level=info msg="CreateContainer within sandbox \"b1596696bc605d1385ad4784297d0515cad041fa979380d1a6230661a8ae947e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 11 05:12:18.788628 containerd[1533]: time="2025-07-11T05:12:18.788597982Z" level=info msg="CreateContainer within sandbox 
\"8103cbb286d3532711934369055117785271a6caa720c7fa9b6c74ae9f0e9ffb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 11 05:12:18.796214 containerd[1533]: time="2025-07-11T05:12:18.796182501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7682547fc8b0b026e04b770bdcc00209ab5ec05ae021342f6ada870598cec29d\"" Jul 11 05:12:18.798522 containerd[1533]: time="2025-07-11T05:12:18.798488731Z" level=info msg="CreateContainer within sandbox \"7682547fc8b0b026e04b770bdcc00209ab5ec05ae021342f6ada870598cec29d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 11 05:12:18.799523 containerd[1533]: time="2025-07-11T05:12:18.799493923Z" level=info msg="Container 793c66815f426b6bfa4d049111bb101cfcfc64aa3d01d129a28dcf9a3f31c95c: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:12:18.800448 containerd[1533]: time="2025-07-11T05:12:18.800114837Z" level=info msg="Container afeeec8699988db6b7a7c94ed113bfd1fb2644244d238e02bb003fa4d4e7ed59: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:12:18.808009 containerd[1533]: time="2025-07-11T05:12:18.807958010Z" level=info msg="CreateContainer within sandbox \"b1596696bc605d1385ad4784297d0515cad041fa979380d1a6230661a8ae947e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"afeeec8699988db6b7a7c94ed113bfd1fb2644244d238e02bb003fa4d4e7ed59\"" Jul 11 05:12:18.808265 containerd[1533]: time="2025-07-11T05:12:18.808215463Z" level=info msg="Container 798bdc1928f014df71bb5db5bf434d7444ed08cbe726fa524d7de23b55d5f991: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:12:18.808585 containerd[1533]: time="2025-07-11T05:12:18.808558827Z" level=info msg="StartContainer for \"afeeec8699988db6b7a7c94ed113bfd1fb2644244d238e02bb003fa4d4e7ed59\"" Jul 11 05:12:18.809708 containerd[1533]: time="2025-07-11T05:12:18.809669227Z" level=info 
msg="connecting to shim afeeec8699988db6b7a7c94ed113bfd1fb2644244d238e02bb003fa4d4e7ed59" address="unix:///run/containerd/s/45b4ef9bc17640e034b70fd40a75a2bb5a6170f40e1ab6ac4aa8e348e60cf9b4" protocol=ttrpc version=3 Jul 11 05:12:18.810529 containerd[1533]: time="2025-07-11T05:12:18.810498073Z" level=info msg="CreateContainer within sandbox \"8103cbb286d3532711934369055117785271a6caa720c7fa9b6c74ae9f0e9ffb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"793c66815f426b6bfa4d049111bb101cfcfc64aa3d01d129a28dcf9a3f31c95c\"" Jul 11 05:12:18.811992 containerd[1533]: time="2025-07-11T05:12:18.811102533Z" level=info msg="StartContainer for \"793c66815f426b6bfa4d049111bb101cfcfc64aa3d01d129a28dcf9a3f31c95c\"" Jul 11 05:12:18.812405 containerd[1533]: time="2025-07-11T05:12:18.812381512Z" level=info msg="connecting to shim 793c66815f426b6bfa4d049111bb101cfcfc64aa3d01d129a28dcf9a3f31c95c" address="unix:///run/containerd/s/437ed45419721206e13527f28b9b68b18f3003551b08dd675d60ccb5b9b70650" protocol=ttrpc version=3 Jul 11 05:12:18.814355 containerd[1533]: time="2025-07-11T05:12:18.814324320Z" level=info msg="CreateContainer within sandbox \"7682547fc8b0b026e04b770bdcc00209ab5ec05ae021342f6ada870598cec29d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"798bdc1928f014df71bb5db5bf434d7444ed08cbe726fa524d7de23b55d5f991\"" Jul 11 05:12:18.814724 containerd[1533]: time="2025-07-11T05:12:18.814685739Z" level=info msg="StartContainer for \"798bdc1928f014df71bb5db5bf434d7444ed08cbe726fa524d7de23b55d5f991\"" Jul 11 05:12:18.815795 containerd[1533]: time="2025-07-11T05:12:18.815763352Z" level=info msg="connecting to shim 798bdc1928f014df71bb5db5bf434d7444ed08cbe726fa524d7de23b55d5f991" address="unix:///run/containerd/s/88985020646b8160ad745f9a3ed6e0b74abd1f092eaa7496adbd994e7cc4ace5" protocol=ttrpc version=3 Jul 11 05:12:18.829101 systemd[1]: Started 
cri-containerd-afeeec8699988db6b7a7c94ed113bfd1fb2644244d238e02bb003fa4d4e7ed59.scope - libcontainer container afeeec8699988db6b7a7c94ed113bfd1fb2644244d238e02bb003fa4d4e7ed59. Jul 11 05:12:18.832921 systemd[1]: Started cri-containerd-793c66815f426b6bfa4d049111bb101cfcfc64aa3d01d129a28dcf9a3f31c95c.scope - libcontainer container 793c66815f426b6bfa4d049111bb101cfcfc64aa3d01d129a28dcf9a3f31c95c. Jul 11 05:12:18.834272 systemd[1]: Started cri-containerd-798bdc1928f014df71bb5db5bf434d7444ed08cbe726fa524d7de23b55d5f991.scope - libcontainer container 798bdc1928f014df71bb5db5bf434d7444ed08cbe726fa524d7de23b55d5f991. Jul 11 05:12:18.885838 containerd[1533]: time="2025-07-11T05:12:18.884622557Z" level=info msg="StartContainer for \"798bdc1928f014df71bb5db5bf434d7444ed08cbe726fa524d7de23b55d5f991\" returns successfully" Jul 11 05:12:18.885838 containerd[1533]: time="2025-07-11T05:12:18.884717396Z" level=info msg="StartContainer for \"793c66815f426b6bfa4d049111bb101cfcfc64aa3d01d129a28dcf9a3f31c95c\" returns successfully" Jul 11 05:12:18.894392 kubelet[2309]: W0711 05:12:18.894283 2309 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.147:6443: connect: connection refused Jul 11 05:12:18.894518 kubelet[2309]: E0711 05:12:18.894365 2309 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.147:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.147:6443: connect: connection refused" logger="UnhandledError" Jul 11 05:12:18.899986 containerd[1533]: time="2025-07-11T05:12:18.898562738Z" level=info msg="StartContainer for \"afeeec8699988db6b7a7c94ed113bfd1fb2644244d238e02bb003fa4d4e7ed59\" returns successfully" Jul 11 05:12:18.978183 kubelet[2309]: I0711 05:12:18.978151 2309 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 05:12:18.978536 kubelet[2309]: E0711 05:12:18.978497 2309 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.147:6443/api/v1/nodes\": dial tcp 10.0.0.147:6443: connect: connection refused" node="localhost" Jul 11 05:12:19.054350 kubelet[2309]: E0711 05:12:19.054277 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:12:19.056443 kubelet[2309]: E0711 05:12:19.056274 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:12:19.059228 kubelet[2309]: E0711 05:12:19.059205 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:12:19.779948 kubelet[2309]: I0711 05:12:19.779901 2309 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 11 05:12:20.062556 kubelet[2309]: E0711 05:12:20.062463 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:12:20.063395 kubelet[2309]: E0711 05:12:20.063359 2309 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 11 05:12:21.363679 kubelet[2309]: E0711 05:12:21.363623 2309 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 11 05:12:21.524844 kubelet[2309]: I0711 05:12:21.524802 2309 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 11 05:12:21.524844 kubelet[2309]: E0711 05:12:21.524843 2309 kubelet_node_status.go:548] "Error updating 
node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 11 05:12:21.526600 kubelet[2309]: I0711 05:12:21.526549 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 11 05:12:21.535848 kubelet[2309]: E0711 05:12:21.535822 2309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 11 05:12:21.536525 kubelet[2309]: I0711 05:12:21.535900 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 05:12:21.537732 kubelet[2309]: E0711 05:12:21.537692 2309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 11 05:12:21.537732 kubelet[2309]: I0711 05:12:21.537715 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 11 05:12:21.539516 kubelet[2309]: E0711 05:12:21.539466 2309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 11 05:12:22.007742 kubelet[2309]: I0711 05:12:22.007708 2309 apiserver.go:52] "Watching apiserver" Jul 11 05:12:22.024257 kubelet[2309]: I0711 05:12:22.024232 2309 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 11 05:12:22.103371 kubelet[2309]: I0711 05:12:22.103345 2309 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 11 05:12:22.105193 kubelet[2309]: E0711 05:12:22.105141 2309 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is 
forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 11 05:12:23.331791 systemd[1]: Reload requested from client PID 2586 ('systemctl') (unit session-7.scope)... Jul 11 05:12:23.331807 systemd[1]: Reloading... Jul 11 05:12:23.394007 zram_generator::config[2632]: No configuration found. Jul 11 05:12:23.459755 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 11 05:12:23.554843 systemd[1]: Reloading finished in 222 ms. Jul 11 05:12:23.589620 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:12:23.600871 systemd[1]: kubelet.service: Deactivated successfully. Jul 11 05:12:23.602057 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:12:23.602126 systemd[1]: kubelet.service: Consumed 1.590s CPU time, 128.8M memory peak. Jul 11 05:12:23.603790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 11 05:12:23.732895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 11 05:12:23.738471 (kubelet)[2671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 11 05:12:23.781114 kubelet[2671]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 11 05:12:23.781114 kubelet[2671]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 11 05:12:23.781114 kubelet[2671]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 11 05:12:23.782117 kubelet[2671]: I0711 05:12:23.781158 2671 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 11 05:12:23.787996 kubelet[2671]: I0711 05:12:23.787945 2671 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 11 05:12:23.788123 kubelet[2671]: I0711 05:12:23.788110 2671 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 11 05:12:23.788424 kubelet[2671]: I0711 05:12:23.788405 2671 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 11 05:12:23.789729 kubelet[2671]: I0711 05:12:23.789703 2671 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jul 11 05:12:23.792812 kubelet[2671]: I0711 05:12:23.792776 2671 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 11 05:12:23.796817 kubelet[2671]: I0711 05:12:23.796798 2671 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jul 11 05:12:23.800006 kubelet[2671]: I0711 05:12:23.799699 2671 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 11 05:12:23.800006 kubelet[2671]: I0711 05:12:23.799891 2671 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 11 05:12:23.800549 kubelet[2671]: I0711 05:12:23.799914 2671 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 11 05:12:23.800549 kubelet[2671]: I0711 05:12:23.800188 2671 topology_manager.go:138] "Creating topology manager with none policy"
Jul 11 05:12:23.800549 kubelet[2671]: I0711 05:12:23.800198 2671 container_manager_linux.go:304] "Creating device plugin manager"
Jul 11 05:12:23.800549 kubelet[2671]: I0711 05:12:23.800242 2671 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 05:12:23.800549 kubelet[2671]: I0711 05:12:23.800367 2671 kubelet.go:446] "Attempting to sync node with API server"
Jul 11 05:12:23.800745 kubelet[2671]: I0711 05:12:23.800379 2671 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 11 05:12:23.800745 kubelet[2671]: I0711 05:12:23.800401 2671 kubelet.go:352] "Adding apiserver pod source"
Jul 11 05:12:23.800745 kubelet[2671]: I0711 05:12:23.800413 2671 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 11 05:12:23.801364 kubelet[2671]: I0711 05:12:23.801342 2671 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Jul 11 05:12:23.801883 kubelet[2671]: I0711 05:12:23.801861 2671 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 11 05:12:23.802383 kubelet[2671]: I0711 05:12:23.802348 2671 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 11 05:12:23.802473 kubelet[2671]: I0711 05:12:23.802463 2671 server.go:1287] "Started kubelet"
Jul 11 05:12:23.804463 kubelet[2671]: I0711 05:12:23.804444 2671 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 11 05:12:23.804937 kubelet[2671]: I0711 05:12:23.804908 2671 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 11 05:12:23.806042 kubelet[2671]: I0711 05:12:23.804961 2671 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 11 05:12:23.806255 kubelet[2671]: I0711 05:12:23.806225 2671 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 11 05:12:23.808194 kubelet[2671]: I0711 05:12:23.808155 2671 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 11 05:12:23.808814 kubelet[2671]: I0711 05:12:23.808784 2671 server.go:479] "Adding debug handlers to kubelet server"
Jul 11 05:12:23.809695 kubelet[2671]: E0711 05:12:23.809655 2671 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 11 05:12:23.809763 kubelet[2671]: I0711 05:12:23.809708 2671 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 11 05:12:23.809882 kubelet[2671]: I0711 05:12:23.809861 2671 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 11 05:12:23.810115 kubelet[2671]: I0711 05:12:23.810091 2671 reconciler.go:26] "Reconciler: start to sync state"
Jul 11 05:12:23.815975 kubelet[2671]: I0711 05:12:23.814497 2671 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 11 05:12:23.815975 kubelet[2671]: I0711 05:12:23.815434 2671 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 11 05:12:23.815975 kubelet[2671]: I0711 05:12:23.815454 2671 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 11 05:12:23.815975 kubelet[2671]: I0711 05:12:23.815472 2671 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 11 05:12:23.815975 kubelet[2671]: I0711 05:12:23.815477 2671 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 11 05:12:23.815975 kubelet[2671]: E0711 05:12:23.815517 2671 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 11 05:12:23.818105 kubelet[2671]: I0711 05:12:23.818068 2671 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 11 05:12:23.826544 kubelet[2671]: E0711 05:12:23.826518 2671 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 11 05:12:23.828710 kubelet[2671]: I0711 05:12:23.828685 2671 factory.go:221] Registration of the containerd container factory successfully
Jul 11 05:12:23.828814 kubelet[2671]: I0711 05:12:23.828803 2671 factory.go:221] Registration of the systemd container factory successfully
Jul 11 05:12:23.871365 kubelet[2671]: I0711 05:12:23.871273 2671 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 11 05:12:23.871365 kubelet[2671]: I0711 05:12:23.871305 2671 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 11 05:12:23.871365 kubelet[2671]: I0711 05:12:23.871327 2671 state_mem.go:36] "Initialized new in-memory state store"
Jul 11 05:12:23.871502 kubelet[2671]: I0711 05:12:23.871487 2671 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 11 05:12:23.871526 kubelet[2671]: I0711 05:12:23.871498 2671 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 11 05:12:23.871526 kubelet[2671]: I0711 05:12:23.871516 2671 policy_none.go:49] "None policy: Start"
Jul 11 05:12:23.871526 kubelet[2671]: I0711 05:12:23.871524 2671 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 11 05:12:23.871586 kubelet[2671]: I0711 05:12:23.871533 2671 state_mem.go:35] "Initializing new in-memory state store"
Jul 11 05:12:23.871653 kubelet[2671]: I0711 05:12:23.871636 2671 state_mem.go:75] "Updated machine memory state"
Jul 11 05:12:23.875356 kubelet[2671]: I0711 05:12:23.875317 2671 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 11 05:12:23.875500 kubelet[2671]: I0711 05:12:23.875473 2671 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 11 05:12:23.875535 kubelet[2671]: I0711 05:12:23.875490 2671 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 11 05:12:23.876072 kubelet[2671]: I0711 05:12:23.876051 2671 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 11 05:12:23.876991 kubelet[2671]: E0711 05:12:23.876705 2671 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 11 05:12:23.916149 kubelet[2671]: I0711 05:12:23.916094 2671 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 11 05:12:23.916278 kubelet[2671]: I0711 05:12:23.916235 2671 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 11 05:12:23.916884 kubelet[2671]: I0711 05:12:23.916774 2671 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 11 05:12:23.978416 kubelet[2671]: I0711 05:12:23.978392 2671 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 11 05:12:23.985399 kubelet[2671]: I0711 05:12:23.985356 2671 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 11 05:12:23.985724 kubelet[2671]: I0711 05:12:23.985528 2671 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 11 05:12:24.011168 kubelet[2671]: I0711 05:12:24.011114 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 05:12:24.011168 kubelet[2671]: I0711 05:12:24.011148 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 05:12:24.011168 kubelet[2671]: I0711 05:12:24.011167 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4e7a440c80758e2abceec0d6f2e37b60-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"4e7a440c80758e2abceec0d6f2e37b60\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 05:12:24.011298 kubelet[2671]: I0711 05:12:24.011185 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 05:12:24.011298 kubelet[2671]: I0711 05:12:24.011225 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 05:12:24.011298 kubelet[2671]: I0711 05:12:24.011258 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost"
Jul 11 05:12:24.011298 kubelet[2671]: I0711 05:12:24.011279 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost"
Jul 11 05:12:24.011298 kubelet[2671]: I0711 05:12:24.011295 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4e7a440c80758e2abceec0d6f2e37b60-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e7a440c80758e2abceec0d6f2e37b60\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 05:12:24.011421 kubelet[2671]: I0711 05:12:24.011309 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4e7a440c80758e2abceec0d6f2e37b60-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"4e7a440c80758e2abceec0d6f2e37b60\") " pod="kube-system/kube-apiserver-localhost"
Jul 11 05:12:24.801493 kubelet[2671]: I0711 05:12:24.801458 2671 apiserver.go:52] "Watching apiserver"
Jul 11 05:12:24.810468 kubelet[2671]: I0711 05:12:24.810405 2671 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 11 05:12:24.848636 kubelet[2671]: I0711 05:12:24.847817 2671 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 11 05:12:24.848636 kubelet[2671]: I0711 05:12:24.847925 2671 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 11 05:12:24.856518 kubelet[2671]: E0711 05:12:24.856481 2671 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 11 05:12:24.858302 kubelet[2671]: E0711 05:12:24.858268 2671 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Jul 11 05:12:24.868655 kubelet[2671]: I0711 05:12:24.868524 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.8684914259999998 podStartE2EDuration="1.868491426s" podCreationTimestamp="2025-07-11 05:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:12:24.868481341 +0000 UTC m=+1.126930540" watchObservedRunningTime="2025-07-11 05:12:24.868491426 +0000 UTC m=+1.126940665"
Jul 11 05:12:24.884927 kubelet[2671]: I0711 05:12:24.884843 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.884820401 podStartE2EDuration="1.884820401s" podCreationTimestamp="2025-07-11 05:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:12:24.87620502 +0000 UTC m=+1.134654259" watchObservedRunningTime="2025-07-11 05:12:24.884820401 +0000 UTC m=+1.143269640"
Jul 11 05:12:24.893731 kubelet[2671]: I0711 05:12:24.893674 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8936580379999999 podStartE2EDuration="1.893658038s" podCreationTimestamp="2025-07-11 05:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:12:24.885440508 +0000 UTC m=+1.143889747" watchObservedRunningTime="2025-07-11 05:12:24.893658038 +0000 UTC m=+1.152107277"
Jul 11 05:12:29.204581 kubelet[2671]: I0711 05:12:29.204547 2671 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 11 05:12:29.204904 containerd[1533]: time="2025-07-11T05:12:29.204838605Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 11 05:12:29.205114 kubelet[2671]: I0711 05:12:29.205044 2671 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 11 05:12:30.170933 systemd[1]: Created slice kubepods-besteffort-pod175ac747_89b2_46e3_8b39_ef2ff4b32816.slice - libcontainer container kubepods-besteffort-pod175ac747_89b2_46e3_8b39_ef2ff4b32816.slice.
Jul 11 05:12:30.254628 kubelet[2671]: I0711 05:12:30.254563 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/175ac747-89b2-46e3-8b39-ef2ff4b32816-xtables-lock\") pod \"kube-proxy-rdlwx\" (UID: \"175ac747-89b2-46e3-8b39-ef2ff4b32816\") " pod="kube-system/kube-proxy-rdlwx"
Jul 11 05:12:30.254628 kubelet[2671]: I0711 05:12:30.254608 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/175ac747-89b2-46e3-8b39-ef2ff4b32816-kube-proxy\") pod \"kube-proxy-rdlwx\" (UID: \"175ac747-89b2-46e3-8b39-ef2ff4b32816\") " pod="kube-system/kube-proxy-rdlwx"
Jul 11 05:12:30.254628 kubelet[2671]: I0711 05:12:30.254631 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/175ac747-89b2-46e3-8b39-ef2ff4b32816-lib-modules\") pod \"kube-proxy-rdlwx\" (UID: \"175ac747-89b2-46e3-8b39-ef2ff4b32816\") " pod="kube-system/kube-proxy-rdlwx"
Jul 11 05:12:30.255037 kubelet[2671]: I0711 05:12:30.254652 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spgn8\" (UniqueName: \"kubernetes.io/projected/175ac747-89b2-46e3-8b39-ef2ff4b32816-kube-api-access-spgn8\") pod \"kube-proxy-rdlwx\" (UID: \"175ac747-89b2-46e3-8b39-ef2ff4b32816\") " pod="kube-system/kube-proxy-rdlwx"
Jul 11 05:12:30.292445 systemd[1]: Created slice kubepods-besteffort-podeb0d9429_8e59_481c_886d_062ab7622541.slice - libcontainer container kubepods-besteffort-podeb0d9429_8e59_481c_886d_062ab7622541.slice.
Jul 11 05:12:30.355176 kubelet[2671]: I0711 05:12:30.355117 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eb0d9429-8e59-481c-886d-062ab7622541-var-lib-calico\") pod \"tigera-operator-747864d56d-xr224\" (UID: \"eb0d9429-8e59-481c-886d-062ab7622541\") " pod="tigera-operator/tigera-operator-747864d56d-xr224"
Jul 11 05:12:30.355176 kubelet[2671]: I0711 05:12:30.355176 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kklfk\" (UniqueName: \"kubernetes.io/projected/eb0d9429-8e59-481c-886d-062ab7622541-kube-api-access-kklfk\") pod \"tigera-operator-747864d56d-xr224\" (UID: \"eb0d9429-8e59-481c-886d-062ab7622541\") " pod="tigera-operator/tigera-operator-747864d56d-xr224"
Jul 11 05:12:30.480900 containerd[1533]: time="2025-07-11T05:12:30.480803164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rdlwx,Uid:175ac747-89b2-46e3-8b39-ef2ff4b32816,Namespace:kube-system,Attempt:0,}"
Jul 11 05:12:30.494882 containerd[1533]: time="2025-07-11T05:12:30.494832231Z" level=info msg="connecting to shim 3561dc2903359c395b3d884e56aaed6e40b67e2d4109ae3749a411ab0ff05a8a" address="unix:///run/containerd/s/df0710be1bfc888a9b6cff754d66a24715b90c62cd3e628f12cfa6d9f079042f" namespace=k8s.io protocol=ttrpc version=3
Jul 11 05:12:30.522201 systemd[1]: Started cri-containerd-3561dc2903359c395b3d884e56aaed6e40b67e2d4109ae3749a411ab0ff05a8a.scope - libcontainer container 3561dc2903359c395b3d884e56aaed6e40b67e2d4109ae3749a411ab0ff05a8a.
Jul 11 05:12:30.542943 containerd[1533]: time="2025-07-11T05:12:30.542902032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rdlwx,Uid:175ac747-89b2-46e3-8b39-ef2ff4b32816,Namespace:kube-system,Attempt:0,} returns sandbox id \"3561dc2903359c395b3d884e56aaed6e40b67e2d4109ae3749a411ab0ff05a8a\""
Jul 11 05:12:30.546456 containerd[1533]: time="2025-07-11T05:12:30.546249099Z" level=info msg="CreateContainer within sandbox \"3561dc2903359c395b3d884e56aaed6e40b67e2d4109ae3749a411ab0ff05a8a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 11 05:12:30.558374 containerd[1533]: time="2025-07-11T05:12:30.558273952Z" level=info msg="Container f265866445d346a5ad74b13a2d2b20b56a45f910f597dd6fd68230f395718d00: CDI devices from CRI Config.CDIDevices: []"
Jul 11 05:12:30.565634 containerd[1533]: time="2025-07-11T05:12:30.565591319Z" level=info msg="CreateContainer within sandbox \"3561dc2903359c395b3d884e56aaed6e40b67e2d4109ae3749a411ab0ff05a8a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f265866445d346a5ad74b13a2d2b20b56a45f910f597dd6fd68230f395718d00\""
Jul 11 05:12:30.566167 containerd[1533]: time="2025-07-11T05:12:30.566136606Z" level=info msg="StartContainer for \"f265866445d346a5ad74b13a2d2b20b56a45f910f597dd6fd68230f395718d00\""
Jul 11 05:12:30.567612 containerd[1533]: time="2025-07-11T05:12:30.567579729Z" level=info msg="connecting to shim f265866445d346a5ad74b13a2d2b20b56a45f910f597dd6fd68230f395718d00" address="unix:///run/containerd/s/df0710be1bfc888a9b6cff754d66a24715b90c62cd3e628f12cfa6d9f079042f" protocol=ttrpc version=3
Jul 11 05:12:30.586144 systemd[1]: Started cri-containerd-f265866445d346a5ad74b13a2d2b20b56a45f910f597dd6fd68230f395718d00.scope - libcontainer container f265866445d346a5ad74b13a2d2b20b56a45f910f597dd6fd68230f395718d00.
Jul 11 05:12:30.595435 containerd[1533]: time="2025-07-11T05:12:30.595386868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-xr224,Uid:eb0d9429-8e59-481c-886d-062ab7622541,Namespace:tigera-operator,Attempt:0,}"
Jul 11 05:12:30.612840 containerd[1533]: time="2025-07-11T05:12:30.612795933Z" level=info msg="connecting to shim 37c5552de2396455f79f3de2a6a149db2d5fbc795640ee3c8eeb3e88acfa34da" address="unix:///run/containerd/s/77e3d39510bd02c2ca042985fb586a7c7b97d1b61b762bd8545cae3799deee2a" namespace=k8s.io protocol=ttrpc version=3
Jul 11 05:12:30.624169 containerd[1533]: time="2025-07-11T05:12:30.624082319Z" level=info msg="StartContainer for \"f265866445d346a5ad74b13a2d2b20b56a45f910f597dd6fd68230f395718d00\" returns successfully"
Jul 11 05:12:30.647149 systemd[1]: Started cri-containerd-37c5552de2396455f79f3de2a6a149db2d5fbc795640ee3c8eeb3e88acfa34da.scope - libcontainer container 37c5552de2396455f79f3de2a6a149db2d5fbc795640ee3c8eeb3e88acfa34da.
Jul 11 05:12:30.676785 containerd[1533]: time="2025-07-11T05:12:30.676745370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-xr224,Uid:eb0d9429-8e59-481c-886d-062ab7622541,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"37c5552de2396455f79f3de2a6a149db2d5fbc795640ee3c8eeb3e88acfa34da\""
Jul 11 05:12:30.679948 containerd[1533]: time="2025-07-11T05:12:30.679911302Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 11 05:12:30.867498 kubelet[2671]: I0711 05:12:30.867367 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rdlwx" podStartSLOduration=0.867350097 podStartE2EDuration="867.350097ms" podCreationTimestamp="2025-07-11 05:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:12:30.866812132 +0000 UTC m=+7.125261371" watchObservedRunningTime="2025-07-11 05:12:30.867350097 +0000 UTC m=+7.125799336"
Jul 11 05:12:32.120278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4091575980.mount: Deactivated successfully.
Jul 11 05:12:32.501872 containerd[1533]: time="2025-07-11T05:12:32.501738673Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 05:12:32.503308 containerd[1533]: time="2025-07-11T05:12:32.503209078Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=22150610"
Jul 11 05:12:32.504190 containerd[1533]: time="2025-07-11T05:12:32.504121530Z" level=info msg="ImageCreate event name:\"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 05:12:32.506310 containerd[1533]: time="2025-07-11T05:12:32.506236873Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 11 05:12:32.507013 containerd[1533]: time="2025-07-11T05:12:32.506964473Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"22146605\" in 1.827018721s"
Jul 11 05:12:32.507013 containerd[1533]: time="2025-07-11T05:12:32.507011246Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:7f8a5b1dba618e907d5f7804e42b3bd7cd5766bc3b0a66da25ff2c687e356bb0\""
Jul 11 05:12:32.508999 containerd[1533]: time="2025-07-11T05:12:32.508783935Z" level=info msg="CreateContainer within sandbox \"37c5552de2396455f79f3de2a6a149db2d5fbc795640ee3c8eeb3e88acfa34da\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 11 05:12:32.517241 containerd[1533]: time="2025-07-11T05:12:32.517200455Z" level=info msg="Container 2346569bc7f775b679c11a9826f1f733f89327afaee8a29acf814574d6e2f5f5: CDI devices from CRI Config.CDIDevices: []"
Jul 11 05:12:32.526166 containerd[1533]: time="2025-07-11T05:12:32.526120394Z" level=info msg="CreateContainer within sandbox \"37c5552de2396455f79f3de2a6a149db2d5fbc795640ee3c8eeb3e88acfa34da\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2346569bc7f775b679c11a9826f1f733f89327afaee8a29acf814574d6e2f5f5\""
Jul 11 05:12:32.526810 containerd[1533]: time="2025-07-11T05:12:32.526705155Z" level=info msg="StartContainer for \"2346569bc7f775b679c11a9826f1f733f89327afaee8a29acf814574d6e2f5f5\""
Jul 11 05:12:32.527993 containerd[1533]: time="2025-07-11T05:12:32.527936935Z" level=info msg="connecting to shim 2346569bc7f775b679c11a9826f1f733f89327afaee8a29acf814574d6e2f5f5" address="unix:///run/containerd/s/77e3d39510bd02c2ca042985fb586a7c7b97d1b61b762bd8545cae3799deee2a" protocol=ttrpc version=3
Jul 11 05:12:32.547156 systemd[1]: Started cri-containerd-2346569bc7f775b679c11a9826f1f733f89327afaee8a29acf814574d6e2f5f5.scope - libcontainer container 2346569bc7f775b679c11a9826f1f733f89327afaee8a29acf814574d6e2f5f5.
Jul 11 05:12:32.577324 containerd[1533]: time="2025-07-11T05:12:32.577275816Z" level=info msg="StartContainer for \"2346569bc7f775b679c11a9826f1f733f89327afaee8a29acf814574d6e2f5f5\" returns successfully"
Jul 11 05:12:32.870891 kubelet[2671]: I0711 05:12:32.870747 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-xr224" podStartSLOduration=1.041712359 podStartE2EDuration="2.87072303s" podCreationTimestamp="2025-07-11 05:12:30 +0000 UTC" firstStartedPulling="2025-07-11 05:12:30.678641632 +0000 UTC m=+6.937090871" lastFinishedPulling="2025-07-11 05:12:32.507652303 +0000 UTC m=+8.766101542" observedRunningTime="2025-07-11 05:12:32.870611039 +0000 UTC m=+9.129060238" watchObservedRunningTime="2025-07-11 05:12:32.87072303 +0000 UTC m=+9.129172269"
Jul 11 05:12:36.533585 update_engine[1509]: I20250711 05:12:36.533006 1509 update_attempter.cc:509] Updating boot flags...
Jul 11 05:12:37.939797 sudo[1741]: pam_unix(sudo:session): session closed for user root
Jul 11 05:12:37.954999 sshd[1740]: Connection closed by 10.0.0.1 port 35396
Jul 11 05:12:37.955295 sshd-session[1737]: pam_unix(sshd:session): session closed for user core
Jul 11 05:12:37.959219 systemd[1]: sshd@6-10.0.0.147:22-10.0.0.1:35396.service: Deactivated successfully.
Jul 11 05:12:37.961919 systemd[1]: session-7.scope: Deactivated successfully.
Jul 11 05:12:37.962156 systemd[1]: session-7.scope: Consumed 7.529s CPU time, 223.3M memory peak.
Jul 11 05:12:37.963099 systemd-logind[1508]: Session 7 logged out. Waiting for processes to exit.
Jul 11 05:12:37.965619 systemd-logind[1508]: Removed session 7.
Jul 11 05:12:41.515285 systemd[1]: Created slice kubepods-besteffort-pod8ffb0290_4f45_4fda_8a24_974845d40a5b.slice - libcontainer container kubepods-besteffort-pod8ffb0290_4f45_4fda_8a24_974845d40a5b.slice.
Jul 11 05:12:41.532069 kubelet[2671]: I0711 05:12:41.532028 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8ffb0290-4f45-4fda-8a24-974845d40a5b-tigera-ca-bundle\") pod \"calico-typha-7c6956c55c-kbf9j\" (UID: \"8ffb0290-4f45-4fda-8a24-974845d40a5b\") " pod="calico-system/calico-typha-7c6956c55c-kbf9j"
Jul 11 05:12:41.532069 kubelet[2671]: I0711 05:12:41.532076 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8ffb0290-4f45-4fda-8a24-974845d40a5b-typha-certs\") pod \"calico-typha-7c6956c55c-kbf9j\" (UID: \"8ffb0290-4f45-4fda-8a24-974845d40a5b\") " pod="calico-system/calico-typha-7c6956c55c-kbf9j"
Jul 11 05:12:41.532390 kubelet[2671]: I0711 05:12:41.532097 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtnzg\" (UniqueName: \"kubernetes.io/projected/8ffb0290-4f45-4fda-8a24-974845d40a5b-kube-api-access-qtnzg\") pod \"calico-typha-7c6956c55c-kbf9j\" (UID: \"8ffb0290-4f45-4fda-8a24-974845d40a5b\") " pod="calico-system/calico-typha-7c6956c55c-kbf9j"
Jul 11 05:12:41.791659 systemd[1]: Created slice kubepods-besteffort-pod9037642c_15b5_4b73_8ef3_208cf2ded088.slice - libcontainer container kubepods-besteffort-pod9037642c_15b5_4b73_8ef3_208cf2ded088.slice.
Jul 11 05:12:41.819322 containerd[1533]: time="2025-07-11T05:12:41.819169229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c6956c55c-kbf9j,Uid:8ffb0290-4f45-4fda-8a24-974845d40a5b,Namespace:calico-system,Attempt:0,}"
Jul 11 05:12:41.833979 kubelet[2671]: I0711 05:12:41.833935 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9037642c-15b5-4b73-8ef3-208cf2ded088-var-run-calico\") pod \"calico-node-g4r5n\" (UID: \"9037642c-15b5-4b73-8ef3-208cf2ded088\") " pod="calico-system/calico-node-g4r5n"
Jul 11 05:12:41.834083 kubelet[2671]: I0711 05:12:41.834016 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9037642c-15b5-4b73-8ef3-208cf2ded088-cni-log-dir\") pod \"calico-node-g4r5n\" (UID: \"9037642c-15b5-4b73-8ef3-208cf2ded088\") " pod="calico-system/calico-node-g4r5n"
Jul 11 05:12:41.834083 kubelet[2671]: I0711 05:12:41.834037 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9037642c-15b5-4b73-8ef3-208cf2ded088-tigera-ca-bundle\") pod \"calico-node-g4r5n\" (UID: \"9037642c-15b5-4b73-8ef3-208cf2ded088\") " pod="calico-system/calico-node-g4r5n"
Jul 11 05:12:41.834083 kubelet[2671]: I0711 05:12:41.834063 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9037642c-15b5-4b73-8ef3-208cf2ded088-node-certs\") pod \"calico-node-g4r5n\" (UID: \"9037642c-15b5-4b73-8ef3-208cf2ded088\") " pod="calico-system/calico-node-g4r5n"
Jul 11 05:12:41.834083 kubelet[2671]: I0711 05:12:41.834080 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9037642c-15b5-4b73-8ef3-208cf2ded088-xtables-lock\") pod \"calico-node-g4r5n\" (UID: \"9037642c-15b5-4b73-8ef3-208cf2ded088\") " pod="calico-system/calico-node-g4r5n"
Jul 11 05:12:41.834174 kubelet[2671]: I0711 05:12:41.834097 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9037642c-15b5-4b73-8ef3-208cf2ded088-policysync\") pod \"calico-node-g4r5n\" (UID: \"9037642c-15b5-4b73-8ef3-208cf2ded088\") " pod="calico-system/calico-node-g4r5n"
Jul 11 05:12:41.834174 kubelet[2671]: I0711 05:12:41.834111 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9037642c-15b5-4b73-8ef3-208cf2ded088-cni-bin-dir\") pod \"calico-node-g4r5n\" (UID: \"9037642c-15b5-4b73-8ef3-208cf2ded088\") " pod="calico-system/calico-node-g4r5n"
Jul 11 05:12:41.834174 kubelet[2671]: I0711 05:12:41.834126 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9037642c-15b5-4b73-8ef3-208cf2ded088-cni-net-dir\") pod \"calico-node-g4r5n\" (UID: \"9037642c-15b5-4b73-8ef3-208cf2ded088\") " pod="calico-system/calico-node-g4r5n"
Jul 11 05:12:41.834174 kubelet[2671]: I0711 05:12:41.834143 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9037642c-15b5-4b73-8ef3-208cf2ded088-flexvol-driver-host\") pod \"calico-node-g4r5n\" (UID: \"9037642c-15b5-4b73-8ef3-208cf2ded088\") " pod="calico-system/calico-node-g4r5n"
Jul 11 05:12:41.834174 kubelet[2671]: I0711 05:12:41.834160 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9037642c-15b5-4b73-8ef3-208cf2ded088-lib-modules\") pod \"calico-node-g4r5n\" (UID: \"9037642c-15b5-4b73-8ef3-208cf2ded088\") " pod="calico-system/calico-node-g4r5n"
Jul 11 05:12:41.834272 kubelet[2671]: I0711 05:12:41.834173 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9037642c-15b5-4b73-8ef3-208cf2ded088-var-lib-calico\") pod \"calico-node-g4r5n\" (UID: \"9037642c-15b5-4b73-8ef3-208cf2ded088\") " pod="calico-system/calico-node-g4r5n"
Jul 11 05:12:41.834272 kubelet[2671]: I0711 05:12:41.834188 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjgrx\" (UniqueName: \"kubernetes.io/projected/9037642c-15b5-4b73-8ef3-208cf2ded088-kube-api-access-xjgrx\") pod \"calico-node-g4r5n\" (UID: \"9037642c-15b5-4b73-8ef3-208cf2ded088\") " pod="calico-system/calico-node-g4r5n"
Jul 11 05:12:41.855851 containerd[1533]: time="2025-07-11T05:12:41.855736158Z" level=info msg="connecting to shim b75f4bb446a8e5313080e993677d0317568bbfaaa4a16418dda75d58b7e5f8a2" address="unix:///run/containerd/s/084b9e793c2d5faf1d43ee5d04f7eaca1dd8980b26cc15f0e58c7c10ef30374f" namespace=k8s.io protocol=ttrpc version=3
Jul 11 05:12:41.926135 systemd[1]: Started cri-containerd-b75f4bb446a8e5313080e993677d0317568bbfaaa4a16418dda75d58b7e5f8a2.scope - libcontainer container b75f4bb446a8e5313080e993677d0317568bbfaaa4a16418dda75d58b7e5f8a2.
Jul 11 05:12:41.945231 kubelet[2671]: E0711 05:12:41.945202 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:41.945347 kubelet[2671]: W0711 05:12:41.945224 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:41.945347 kubelet[2671]: E0711 05:12:41.945268 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:41.951116 kubelet[2671]: E0711 05:12:41.951091 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:41.951116 kubelet[2671]: W0711 05:12:41.951109 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:41.951305 kubelet[2671]: E0711 05:12:41.951125 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:41.985173 containerd[1533]: time="2025-07-11T05:12:41.985133797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c6956c55c-kbf9j,Uid:8ffb0290-4f45-4fda-8a24-974845d40a5b,Namespace:calico-system,Attempt:0,} returns sandbox id \"b75f4bb446a8e5313080e993677d0317568bbfaaa4a16418dda75d58b7e5f8a2\"" Jul 11 05:12:41.986886 containerd[1533]: time="2025-07-11T05:12:41.986855579Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 11 05:12:42.074089 kubelet[2671]: E0711 05:12:42.073938 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv579" podUID="23f37a14-7ca7-435f-adc0-f7dd1f70c437" Jul 11 05:12:42.094799 containerd[1533]: time="2025-07-11T05:12:42.094766745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g4r5n,Uid:9037642c-15b5-4b73-8ef3-208cf2ded088,Namespace:calico-system,Attempt:0,}" Jul 11 05:12:42.120673 containerd[1533]: time="2025-07-11T05:12:42.120166394Z" level=info msg="connecting to shim ce017d15eaf6cc203570b46b2c4d7f8a25c3209d06c55c4b9f7a07a75c0a94c9" address="unix:///run/containerd/s/21e11f04ffe6ee149eb4e5fb936f7112184af4e99ca78c7cb1774d2d1a2da889" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:12:42.122387 kubelet[2671]: E0711 05:12:42.122358 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.122387 kubelet[2671]: W0711 05:12:42.122382 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.122490 kubelet[2671]: E0711 05:12:42.122405 2671 plugins.go:695] "Error 
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.122568 kubelet[2671]: E0711 05:12:42.122553 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.122619 kubelet[2671]: W0711 05:12:42.122566 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.122619 kubelet[2671]: E0711 05:12:42.122608 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.123089 kubelet[2671]: E0711 05:12:42.123069 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.123089 kubelet[2671]: W0711 05:12:42.123085 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.123557 kubelet[2671]: E0711 05:12:42.123096 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.123557 kubelet[2671]: E0711 05:12:42.123287 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.123557 kubelet[2671]: W0711 05:12:42.123295 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.123557 kubelet[2671]: E0711 05:12:42.123303 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.123557 kubelet[2671]: E0711 05:12:42.123465 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.123557 kubelet[2671]: W0711 05:12:42.123474 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.123557 kubelet[2671]: E0711 05:12:42.123482 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.123759 kubelet[2671]: E0711 05:12:42.123609 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.123759 kubelet[2671]: W0711 05:12:42.123616 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.123759 kubelet[2671]: E0711 05:12:42.123623 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.123759 kubelet[2671]: E0711 05:12:42.123753 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.123759 kubelet[2671]: W0711 05:12:42.123760 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.123992 kubelet[2671]: E0711 05:12:42.123770 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.123992 kubelet[2671]: E0711 05:12:42.123899 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.123992 kubelet[2671]: W0711 05:12:42.123906 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.123992 kubelet[2671]: E0711 05:12:42.123913 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.124178 kubelet[2671]: E0711 05:12:42.124072 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.124178 kubelet[2671]: W0711 05:12:42.124080 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.124178 kubelet[2671]: E0711 05:12:42.124088 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.124260 kubelet[2671]: E0711 05:12:42.124209 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.124260 kubelet[2671]: W0711 05:12:42.124216 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.124260 kubelet[2671]: E0711 05:12:42.124223 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.124260 kubelet[2671]: E0711 05:12:42.124336 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.124260 kubelet[2671]: W0711 05:12:42.124343 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.124260 kubelet[2671]: E0711 05:12:42.124350 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.124531 kubelet[2671]: E0711 05:12:42.124497 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.124531 kubelet[2671]: W0711 05:12:42.124514 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.124531 kubelet[2671]: E0711 05:12:42.124522 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.124669 kubelet[2671]: E0711 05:12:42.124655 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.124723 kubelet[2671]: W0711 05:12:42.124665 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.124751 kubelet[2671]: E0711 05:12:42.124719 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.125428 kubelet[2671]: E0711 05:12:42.125029 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.125428 kubelet[2671]: W0711 05:12:42.125040 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.125428 kubelet[2671]: E0711 05:12:42.125050 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.125864 kubelet[2671]: E0711 05:12:42.125843 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.125864 kubelet[2671]: W0711 05:12:42.125860 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.126047 kubelet[2671]: E0711 05:12:42.125872 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.126047 kubelet[2671]: E0711 05:12:42.126038 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.126047 kubelet[2671]: W0711 05:12:42.126046 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.126191 kubelet[2671]: E0711 05:12:42.126054 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.126243 kubelet[2671]: E0711 05:12:42.126230 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.126243 kubelet[2671]: W0711 05:12:42.126239 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.126243 kubelet[2671]: E0711 05:12:42.126248 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.126387 kubelet[2671]: E0711 05:12:42.126373 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.126387 kubelet[2671]: W0711 05:12:42.126384 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.126455 kubelet[2671]: E0711 05:12:42.126393 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.126520 kubelet[2671]: E0711 05:12:42.126508 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.126520 kubelet[2671]: W0711 05:12:42.126518 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.126573 kubelet[2671]: E0711 05:12:42.126525 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.126646 kubelet[2671]: E0711 05:12:42.126635 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.126646 kubelet[2671]: W0711 05:12:42.126645 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.126732 kubelet[2671]: E0711 05:12:42.126653 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.136074 kubelet[2671]: E0711 05:12:42.136057 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.136220 kubelet[2671]: W0711 05:12:42.136145 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.136220 kubelet[2671]: E0711 05:12:42.136163 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.136434 kubelet[2671]: I0711 05:12:42.136204 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/23f37a14-7ca7-435f-adc0-f7dd1f70c437-registration-dir\") pod \"csi-node-driver-sv579\" (UID: \"23f37a14-7ca7-435f-adc0-f7dd1f70c437\") " pod="calico-system/csi-node-driver-sv579" Jul 11 05:12:42.136539 kubelet[2671]: E0711 05:12:42.136526 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.136650 kubelet[2671]: W0711 05:12:42.136598 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.136650 kubelet[2671]: E0711 05:12:42.136619 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.136940 kubelet[2671]: E0711 05:12:42.136900 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.136940 kubelet[2671]: W0711 05:12:42.136913 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.137070 kubelet[2671]: E0711 05:12:42.136930 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.137187 kubelet[2671]: E0711 05:12:42.137170 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.137214 kubelet[2671]: W0711 05:12:42.137186 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.137214 kubelet[2671]: E0711 05:12:42.137199 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.137299 kubelet[2671]: I0711 05:12:42.137225 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/23f37a14-7ca7-435f-adc0-f7dd1f70c437-varrun\") pod \"csi-node-driver-sv579\" (UID: \"23f37a14-7ca7-435f-adc0-f7dd1f70c437\") " pod="calico-system/csi-node-driver-sv579" Jul 11 05:12:42.137453 kubelet[2671]: E0711 05:12:42.137442 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.137486 kubelet[2671]: W0711 05:12:42.137453 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.137486 kubelet[2671]: E0711 05:12:42.137468 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.137486 kubelet[2671]: I0711 05:12:42.137483 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/23f37a14-7ca7-435f-adc0-f7dd1f70c437-socket-dir\") pod \"csi-node-driver-sv579\" (UID: \"23f37a14-7ca7-435f-adc0-f7dd1f70c437\") " pod="calico-system/csi-node-driver-sv579" Jul 11 05:12:42.137716 kubelet[2671]: E0711 05:12:42.137694 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.137716 kubelet[2671]: W0711 05:12:42.137715 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.137783 kubelet[2671]: E0711 05:12:42.137732 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.137783 kubelet[2671]: I0711 05:12:42.137747 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/23f37a14-7ca7-435f-adc0-f7dd1f70c437-kubelet-dir\") pod \"csi-node-driver-sv579\" (UID: \"23f37a14-7ca7-435f-adc0-f7dd1f70c437\") " pod="calico-system/csi-node-driver-sv579" Jul 11 05:12:42.137941 kubelet[2671]: E0711 05:12:42.137926 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.137941 kubelet[2671]: W0711 05:12:42.137939 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.138002 kubelet[2671]: E0711 05:12:42.137955 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.138002 kubelet[2671]: I0711 05:12:42.137982 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krxkl\" (UniqueName: \"kubernetes.io/projected/23f37a14-7ca7-435f-adc0-f7dd1f70c437-kube-api-access-krxkl\") pod \"csi-node-driver-sv579\" (UID: \"23f37a14-7ca7-435f-adc0-f7dd1f70c437\") " pod="calico-system/csi-node-driver-sv579" Jul 11 05:12:42.138177 kubelet[2671]: E0711 05:12:42.138162 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.138222 kubelet[2671]: W0711 05:12:42.138177 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.138222 kubelet[2671]: E0711 05:12:42.138192 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.138374 kubelet[2671]: E0711 05:12:42.138362 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.138374 kubelet[2671]: W0711 05:12:42.138374 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.138433 kubelet[2671]: E0711 05:12:42.138388 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.138773 kubelet[2671]: E0711 05:12:42.138637 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.138773 kubelet[2671]: W0711 05:12:42.138744 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.138773 kubelet[2671]: E0711 05:12:42.138766 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.139603 kubelet[2671]: E0711 05:12:42.139410 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.139603 kubelet[2671]: W0711 05:12:42.139427 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.139879 kubelet[2671]: E0711 05:12:42.139466 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.147164 systemd[1]: Started cri-containerd-ce017d15eaf6cc203570b46b2c4d7f8a25c3209d06c55c4b9f7a07a75c0a94c9.scope - libcontainer container ce017d15eaf6cc203570b46b2c4d7f8a25c3209d06c55c4b9f7a07a75c0a94c9. 
Jul 11 05:12:42.147683 kubelet[2671]: E0711 05:12:42.147660 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.147683 kubelet[2671]: W0711 05:12:42.147677 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.147868 kubelet[2671]: E0711 05:12:42.147834 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.147868 kubelet[2671]: W0711 05:12:42.147842 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.147868 kubelet[2671]: E0711 05:12:42.147855 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.147868 kubelet[2671]: E0711 05:12:42.147903 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.147868 kubelet[2671]: E0711 05:12:42.147960 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.147868 kubelet[2671]: W0711 05:12:42.148019 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.147868 kubelet[2671]: E0711 05:12:42.148075 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.148876 kubelet[2671]: E0711 05:12:42.148338 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.148876 kubelet[2671]: W0711 05:12:42.148355 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.148876 kubelet[2671]: E0711 05:12:42.148365 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.206745 containerd[1533]: time="2025-07-11T05:12:42.206696551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g4r5n,Uid:9037642c-15b5-4b73-8ef3-208cf2ded088,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce017d15eaf6cc203570b46b2c4d7f8a25c3209d06c55c4b9f7a07a75c0a94c9\"" Jul 11 05:12:42.239102 kubelet[2671]: E0711 05:12:42.239075 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.239102 kubelet[2671]: W0711 05:12:42.239097 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.239273 kubelet[2671]: E0711 05:12:42.239117 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.239327 kubelet[2671]: E0711 05:12:42.239322 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.239363 kubelet[2671]: W0711 05:12:42.239331 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.239363 kubelet[2671]: E0711 05:12:42.239346 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.239554 kubelet[2671]: E0711 05:12:42.239543 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.239554 kubelet[2671]: W0711 05:12:42.239554 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.239609 kubelet[2671]: E0711 05:12:42.239567 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.239747 kubelet[2671]: E0711 05:12:42.239734 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.239747 kubelet[2671]: W0711 05:12:42.239746 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.239814 kubelet[2671]: E0711 05:12:42.239761 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.239979 kubelet[2671]: E0711 05:12:42.239952 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.239979 kubelet[2671]: W0711 05:12:42.239964 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.240079 kubelet[2671]: E0711 05:12:42.239993 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.240166 kubelet[2671]: E0711 05:12:42.240153 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.240166 kubelet[2671]: W0711 05:12:42.240165 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.240214 kubelet[2671]: E0711 05:12:42.240179 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.240330 kubelet[2671]: E0711 05:12:42.240317 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.240330 kubelet[2671]: W0711 05:12:42.240327 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.240378 kubelet[2671]: E0711 05:12:42.240343 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.240522 kubelet[2671]: E0711 05:12:42.240508 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.240522 kubelet[2671]: W0711 05:12:42.240521 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.240602 kubelet[2671]: E0711 05:12:42.240546 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.240677 kubelet[2671]: E0711 05:12:42.240663 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.240677 kubelet[2671]: W0711 05:12:42.240676 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.240757 kubelet[2671]: E0711 05:12:42.240696 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.240835 kubelet[2671]: E0711 05:12:42.240822 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.240835 kubelet[2671]: W0711 05:12:42.240835 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.240896 kubelet[2671]: E0711 05:12:42.240859 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.240985 kubelet[2671]: E0711 05:12:42.240961 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.240985 kubelet[2671]: W0711 05:12:42.240985 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.241036 kubelet[2671]: E0711 05:12:42.241006 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.241130 kubelet[2671]: E0711 05:12:42.241117 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.241130 kubelet[2671]: W0711 05:12:42.241129 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.241130 kubelet[2671]: E0711 05:12:42.241162 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.241294 kubelet[2671]: E0711 05:12:42.241285 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.241294 kubelet[2671]: W0711 05:12:42.241292 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.241466 kubelet[2671]: E0711 05:12:42.241308 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.241562 kubelet[2671]: E0711 05:12:42.241547 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.241643 kubelet[2671]: W0711 05:12:42.241631 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.241816 kubelet[2671]: E0711 05:12:42.241733 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.241932 kubelet[2671]: E0711 05:12:42.241920 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.242007 kubelet[2671]: W0711 05:12:42.241996 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.242111 kubelet[2671]: E0711 05:12:42.242073 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.242326 kubelet[2671]: E0711 05:12:42.242290 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.242326 kubelet[2671]: W0711 05:12:42.242308 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.242326 kubelet[2671]: E0711 05:12:42.242326 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.242506 kubelet[2671]: E0711 05:12:42.242494 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.242506 kubelet[2671]: W0711 05:12:42.242505 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.242607 kubelet[2671]: E0711 05:12:42.242577 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.243026 kubelet[2671]: E0711 05:12:42.243007 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.243026 kubelet[2671]: W0711 05:12:42.243025 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.243095 kubelet[2671]: E0711 05:12:42.243059 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.243201 kubelet[2671]: E0711 05:12:42.243189 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.243201 kubelet[2671]: W0711 05:12:42.243200 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.243303 kubelet[2671]: E0711 05:12:42.243266 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.243464 kubelet[2671]: E0711 05:12:42.243445 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.243464 kubelet[2671]: W0711 05:12:42.243461 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.243567 kubelet[2671]: E0711 05:12:42.243488 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.243642 kubelet[2671]: E0711 05:12:42.243625 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.243642 kubelet[2671]: W0711 05:12:42.243637 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.243692 kubelet[2671]: E0711 05:12:42.243651 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.243819 kubelet[2671]: E0711 05:12:42.243805 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.243951 kubelet[2671]: W0711 05:12:42.243819 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.243951 kubelet[2671]: E0711 05:12:42.243832 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.244267 kubelet[2671]: E0711 05:12:42.244131 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.244267 kubelet[2671]: W0711 05:12:42.244145 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.244267 kubelet[2671]: E0711 05:12:42.244162 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.244415 kubelet[2671]: E0711 05:12:42.244403 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.244463 kubelet[2671]: W0711 05:12:42.244452 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.244556 kubelet[2671]: E0711 05:12:42.244544 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:42.245710 kubelet[2671]: E0711 05:12:42.245686 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.245710 kubelet[2671]: W0711 05:12:42.245703 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.245786 kubelet[2671]: E0711 05:12:42.245716 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:42.251546 kubelet[2671]: E0711 05:12:42.251490 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:42.251546 kubelet[2671]: W0711 05:12:42.251504 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:42.251546 kubelet[2671]: E0711 05:12:42.251517 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:43.187229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount31287873.mount: Deactivated successfully. Jul 11 05:12:43.821148 kubelet[2671]: E0711 05:12:43.821098 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv579" podUID="23f37a14-7ca7-435f-adc0-f7dd1f70c437" Jul 11 05:12:43.827301 containerd[1533]: time="2025-07-11T05:12:43.827254777Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:43.832797 containerd[1533]: time="2025-07-11T05:12:43.832766178Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33087207" Jul 11 05:12:43.833792 containerd[1533]: time="2025-07-11T05:12:43.833762737Z" level=info msg="ImageCreate event name:\"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:43.835364 containerd[1533]: time="2025-07-11T05:12:43.835329668Z" level=info msg="ImageCreate 
event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:43.836194 containerd[1533]: time="2025-07-11T05:12:43.836167722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"33087061\" in 1.848792772s" Jul 11 05:12:43.836238 containerd[1533]: time="2025-07-11T05:12:43.836198967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:bd819526ff844d29b60cd75e846a1f55306016ff269d881d52a9b6c7b2eef0b2\"" Jul 11 05:12:43.838729 containerd[1533]: time="2025-07-11T05:12:43.838695126Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 11 05:12:43.847876 containerd[1533]: time="2025-07-11T05:12:43.847459486Z" level=info msg="CreateContainer within sandbox \"b75f4bb446a8e5313080e993677d0317568bbfaaa4a16418dda75d58b7e5f8a2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 11 05:12:43.857247 containerd[1533]: time="2025-07-11T05:12:43.857212725Z" level=info msg="Container d71bc012cbcf546b3c00a8d45667945e319f097f31197ff4845aeff3b65061e4: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:12:43.872301 containerd[1533]: time="2025-07-11T05:12:43.872209322Z" level=info msg="CreateContainer within sandbox \"b75f4bb446a8e5313080e993677d0317568bbfaaa4a16418dda75d58b7e5f8a2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d71bc012cbcf546b3c00a8d45667945e319f097f31197ff4845aeff3b65061e4\"" Jul 11 05:12:43.872797 containerd[1533]: time="2025-07-11T05:12:43.872775173Z" level=info msg="StartContainer for 
\"d71bc012cbcf546b3c00a8d45667945e319f097f31197ff4845aeff3b65061e4\"" Jul 11 05:12:43.874293 containerd[1533]: time="2025-07-11T05:12:43.874257690Z" level=info msg="connecting to shim d71bc012cbcf546b3c00a8d45667945e319f097f31197ff4845aeff3b65061e4" address="unix:///run/containerd/s/084b9e793c2d5faf1d43ee5d04f7eaca1dd8980b26cc15f0e58c7c10ef30374f" protocol=ttrpc version=3 Jul 11 05:12:43.899159 systemd[1]: Started cri-containerd-d71bc012cbcf546b3c00a8d45667945e319f097f31197ff4845aeff3b65061e4.scope - libcontainer container d71bc012cbcf546b3c00a8d45667945e319f097f31197ff4845aeff3b65061e4. Jul 11 05:12:43.939236 containerd[1533]: time="2025-07-11T05:12:43.939201029Z" level=info msg="StartContainer for \"d71bc012cbcf546b3c00a8d45667945e319f097f31197ff4845aeff3b65061e4\" returns successfully" Jul 11 05:12:44.931773 containerd[1533]: time="2025-07-11T05:12:44.931718544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:44.932286 containerd[1533]: time="2025-07-11T05:12:44.932240944Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4266981" Jul 11 05:12:44.933743 containerd[1533]: time="2025-07-11T05:12:44.933409202Z" level=info msg="ImageCreate event name:\"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:44.938361 kubelet[2671]: I0711 05:12:44.938292 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7c6956c55c-kbf9j" podStartSLOduration=2.086120023 podStartE2EDuration="3.938259064s" podCreationTimestamp="2025-07-11 05:12:41 +0000 UTC" firstStartedPulling="2025-07-11 05:12:41.986421423 +0000 UTC m=+18.244870622" lastFinishedPulling="2025-07-11 05:12:43.838560424 +0000 UTC m=+20.097009663" observedRunningTime="2025-07-11 05:12:44.934287457 
+0000 UTC m=+21.192736696" watchObservedRunningTime="2025-07-11 05:12:44.938259064 +0000 UTC m=+21.196708303" Jul 11 05:12:44.939680 containerd[1533]: time="2025-07-11T05:12:44.939646596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:44.941681 kubelet[2671]: E0711 05:12:44.941661 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.941728 kubelet[2671]: W0711 05:12:44.941682 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.941728 kubelet[2671]: E0711 05:12:44.941700 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.941864 kubelet[2671]: E0711 05:12:44.941853 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.941909 kubelet[2671]: W0711 05:12:44.941864 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.941909 kubelet[2671]: E0711 05:12:44.941906 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:44.942052 kubelet[2671]: E0711 05:12:44.942041 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.942052 kubelet[2671]: W0711 05:12:44.942052 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.942113 kubelet[2671]: E0711 05:12:44.942061 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.942209 kubelet[2671]: E0711 05:12:44.942199 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.942209 kubelet[2671]: W0711 05:12:44.942208 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.942268 kubelet[2671]: E0711 05:12:44.942217 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:44.942356 kubelet[2671]: E0711 05:12:44.942346 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.942356 kubelet[2671]: W0711 05:12:44.942356 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.942410 kubelet[2671]: E0711 05:12:44.942364 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.942494 kubelet[2671]: E0711 05:12:44.942484 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.942494 kubelet[2671]: W0711 05:12:44.942494 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.942548 kubelet[2671]: E0711 05:12:44.942501 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:44.942626 kubelet[2671]: E0711 05:12:44.942616 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.942656 kubelet[2671]: W0711 05:12:44.942627 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.942656 kubelet[2671]: E0711 05:12:44.942635 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.942763 kubelet[2671]: E0711 05:12:44.942753 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.942794 kubelet[2671]: W0711 05:12:44.942765 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.942794 kubelet[2671]: E0711 05:12:44.942773 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:44.942910 kubelet[2671]: E0711 05:12:44.942899 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.942910 kubelet[2671]: W0711 05:12:44.942909 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.942977 kubelet[2671]: E0711 05:12:44.942917 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.943114 kubelet[2671]: E0711 05:12:44.943101 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.943148 kubelet[2671]: W0711 05:12:44.943113 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.943148 kubelet[2671]: E0711 05:12:44.943125 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:44.943278 kubelet[2671]: E0711 05:12:44.943262 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.943278 kubelet[2671]: W0711 05:12:44.943274 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.943353 kubelet[2671]: E0711 05:12:44.943283 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.943433 kubelet[2671]: E0711 05:12:44.943422 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.943433 kubelet[2671]: W0711 05:12:44.943431 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.943484 kubelet[2671]: E0711 05:12:44.943439 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:44.943613 kubelet[2671]: E0711 05:12:44.943600 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.943613 kubelet[2671]: W0711 05:12:44.943610 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.943707 kubelet[2671]: E0711 05:12:44.943618 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.944396 kubelet[2671]: E0711 05:12:44.944292 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.944396 kubelet[2671]: W0711 05:12:44.944328 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.944396 kubelet[2671]: E0711 05:12:44.944343 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:44.944679 kubelet[2671]: E0711 05:12:44.944663 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.944798 kubelet[2671]: W0711 05:12:44.944734 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.944798 kubelet[2671]: E0711 05:12:44.944750 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.950671 containerd[1533]: time="2025-07-11T05:12:44.950563504Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5636182\" in 1.111835293s" Jul 11 05:12:44.950671 containerd[1533]: time="2025-07-11T05:12:44.950606711Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:53f638101e3d73f7dd5e42dc42fb3d94ae1978e8958677222c3de6ec1d8c3d4f\"" Jul 11 05:12:44.953442 containerd[1533]: time="2025-07-11T05:12:44.953383175Z" level=info msg="CreateContainer within sandbox \"ce017d15eaf6cc203570b46b2c4d7f8a25c3209d06c55c4b9f7a07a75c0a94c9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 11 05:12:44.956654 kubelet[2671]: E0711 05:12:44.956635 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.956812 kubelet[2671]: W0711 05:12:44.956738 2671 driver-call.go:149] FlexVolume: 
driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.956812 kubelet[2671]: E0711 05:12:44.956759 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.957110 kubelet[2671]: E0711 05:12:44.957096 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.957262 kubelet[2671]: W0711 05:12:44.957166 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.957262 kubelet[2671]: E0711 05:12:44.957188 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.957401 kubelet[2671]: E0711 05:12:44.957376 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.957401 kubelet[2671]: W0711 05:12:44.957396 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.957472 kubelet[2671]: E0711 05:12:44.957412 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:44.957645 kubelet[2671]: E0711 05:12:44.957604 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.957645 kubelet[2671]: W0711 05:12:44.957618 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.957645 kubelet[2671]: E0711 05:12:44.957628 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.957865 kubelet[2671]: E0711 05:12:44.957848 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.957865 kubelet[2671]: W0711 05:12:44.957861 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.957960 kubelet[2671]: E0711 05:12:44.957876 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:44.958105 kubelet[2671]: E0711 05:12:44.958089 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.958105 kubelet[2671]: W0711 05:12:44.958101 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.958460 kubelet[2671]: E0711 05:12:44.958116 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.958716 kubelet[2671]: E0711 05:12:44.958652 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.958716 kubelet[2671]: W0711 05:12:44.958672 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.958716 kubelet[2671]: E0711 05:12:44.958692 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:44.958950 kubelet[2671]: E0711 05:12:44.958931 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.958950 kubelet[2671]: W0711 05:12:44.958946 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.959084 kubelet[2671]: E0711 05:12:44.958963 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.959155 kubelet[2671]: E0711 05:12:44.959124 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.959155 kubelet[2671]: W0711 05:12:44.959132 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.959250 kubelet[2671]: E0711 05:12:44.959175 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:44.959541 kubelet[2671]: E0711 05:12:44.959502 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.959541 kubelet[2671]: W0711 05:12:44.959527 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.959709 kubelet[2671]: E0711 05:12:44.959610 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.960233 kubelet[2671]: E0711 05:12:44.960219 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.960323 kubelet[2671]: W0711 05:12:44.960303 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.960608 kubelet[2671]: E0711 05:12:44.960483 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:44.960767 containerd[1533]: time="2025-07-11T05:12:44.960735258Z" level=info msg="Container 9094c0b32f77d1986df11d90bab6a0662f4fa99a888e6a626f22f896b5f1fdfb: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:12:44.960953 kubelet[2671]: E0711 05:12:44.960915 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.960953 kubelet[2671]: W0711 05:12:44.960933 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.962028 kubelet[2671]: E0711 05:12:44.961923 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.962895 kubelet[2671]: E0711 05:12:44.962446 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.963037 kubelet[2671]: W0711 05:12:44.963007 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.963070 kubelet[2671]: E0711 05:12:44.963042 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:44.965049 kubelet[2671]: E0711 05:12:44.963369 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.965049 kubelet[2671]: W0711 05:12:44.963447 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.965049 kubelet[2671]: E0711 05:12:44.963493 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.965049 kubelet[2671]: E0711 05:12:44.964050 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.965049 kubelet[2671]: W0711 05:12:44.964065 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.965049 kubelet[2671]: E0711 05:12:44.964100 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:44.965452 kubelet[2671]: E0711 05:12:44.965434 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.965452 kubelet[2671]: W0711 05:12:44.965451 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.965520 kubelet[2671]: E0711 05:12:44.965474 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.965712 kubelet[2671]: E0711 05:12:44.965695 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.965742 kubelet[2671]: W0711 05:12:44.965713 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.965951 kubelet[2671]: E0711 05:12:44.965936 2671 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 11 05:12:44.966005 kubelet[2671]: W0711 05:12:44.965951 2671 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 11 05:12:44.966005 kubelet[2671]: E0711 05:12:44.965962 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 11 05:12:44.966096 kubelet[2671]: E0711 05:12:44.966027 2671 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 11 05:12:44.971982 containerd[1533]: time="2025-07-11T05:12:44.971855638Z" level=info msg="CreateContainer within sandbox \"ce017d15eaf6cc203570b46b2c4d7f8a25c3209d06c55c4b9f7a07a75c0a94c9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9094c0b32f77d1986df11d90bab6a0662f4fa99a888e6a626f22f896b5f1fdfb\"" Jul 11 05:12:44.973004 containerd[1533]: time="2025-07-11T05:12:44.972960487Z" level=info msg="StartContainer for \"9094c0b32f77d1986df11d90bab6a0662f4fa99a888e6a626f22f896b5f1fdfb\"" Jul 11 05:12:44.975707 containerd[1533]: time="2025-07-11T05:12:44.975668861Z" level=info msg="connecting to shim 9094c0b32f77d1986df11d90bab6a0662f4fa99a888e6a626f22f896b5f1fdfb" address="unix:///run/containerd/s/21e11f04ffe6ee149eb4e5fb936f7112184af4e99ca78c7cb1774d2d1a2da889" protocol=ttrpc version=3 Jul 11 05:12:45.006125 systemd[1]: Started cri-containerd-9094c0b32f77d1986df11d90bab6a0662f4fa99a888e6a626f22f896b5f1fdfb.scope - libcontainer container 9094c0b32f77d1986df11d90bab6a0662f4fa99a888e6a626f22f896b5f1fdfb. Jul 11 05:12:45.036933 containerd[1533]: time="2025-07-11T05:12:45.036875499Z" level=info msg="StartContainer for \"9094c0b32f77d1986df11d90bab6a0662f4fa99a888e6a626f22f896b5f1fdfb\" returns successfully" Jul 11 05:12:45.068850 systemd[1]: cri-containerd-9094c0b32f77d1986df11d90bab6a0662f4fa99a888e6a626f22f896b5f1fdfb.scope: Deactivated successfully. Jul 11 05:12:45.069192 systemd[1]: cri-containerd-9094c0b32f77d1986df11d90bab6a0662f4fa99a888e6a626f22f896b5f1fdfb.scope: Consumed 49ms CPU time, 6.1M memory peak, 4.5M written to disk. 
Jul 11 05:12:45.086495 containerd[1533]: time="2025-07-11T05:12:45.086386780Z" level=info msg="received exit event container_id:\"9094c0b32f77d1986df11d90bab6a0662f4fa99a888e6a626f22f896b5f1fdfb\" id:\"9094c0b32f77d1986df11d90bab6a0662f4fa99a888e6a626f22f896b5f1fdfb\" pid:3366 exited_at:{seconds:1752210765 nanos:81799629}" Jul 11 05:12:45.086706 containerd[1533]: time="2025-07-11T05:12:45.086454110Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9094c0b32f77d1986df11d90bab6a0662f4fa99a888e6a626f22f896b5f1fdfb\" id:\"9094c0b32f77d1986df11d90bab6a0662f4fa99a888e6a626f22f896b5f1fdfb\" pid:3366 exited_at:{seconds:1752210765 nanos:81799629}" Jul 11 05:12:45.135993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9094c0b32f77d1986df11d90bab6a0662f4fa99a888e6a626f22f896b5f1fdfb-rootfs.mount: Deactivated successfully. Jul 11 05:12:45.816813 kubelet[2671]: E0711 05:12:45.816741 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv579" podUID="23f37a14-7ca7-435f-adc0-f7dd1f70c437" Jul 11 05:12:45.927922 kubelet[2671]: I0711 05:12:45.927895 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 05:12:45.930207 containerd[1533]: time="2025-07-11T05:12:45.930162390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 11 05:12:47.816848 kubelet[2671]: E0711 05:12:47.816755 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-sv579" podUID="23f37a14-7ca7-435f-adc0-f7dd1f70c437" Jul 11 05:12:48.672311 containerd[1533]: time="2025-07-11T05:12:48.672255125Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:48.673433 containerd[1533]: time="2025-07-11T05:12:48.673398552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=65888320" Jul 11 05:12:48.674241 containerd[1533]: time="2025-07-11T05:12:48.674209177Z" level=info msg="ImageCreate event name:\"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:48.676247 containerd[1533]: time="2025-07-11T05:12:48.676213515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:48.676879 containerd[1533]: time="2025-07-11T05:12:48.676846517Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"67257561\" in 2.746644162s" Jul 11 05:12:48.676908 containerd[1533]: time="2025-07-11T05:12:48.676876881Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:f6e344d58b3c5524e767c7d1dd4cb29c85ce820b0f3005a603532b6a22db5588\"" Jul 11 05:12:48.681157 containerd[1533]: time="2025-07-11T05:12:48.681126309Z" level=info msg="CreateContainer within sandbox \"ce017d15eaf6cc203570b46b2c4d7f8a25c3209d06c55c4b9f7a07a75c0a94c9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 11 05:12:48.690269 containerd[1533]: time="2025-07-11T05:12:48.689206430Z" level=info msg="Container bef27720c34511243f291d362b8db14aec126558eee232d258393ae1865aca75: CDI devices from CRI 
Config.CDIDevices: []" Jul 11 05:12:48.697054 containerd[1533]: time="2025-07-11T05:12:48.697018038Z" level=info msg="CreateContainer within sandbox \"ce017d15eaf6cc203570b46b2c4d7f8a25c3209d06c55c4b9f7a07a75c0a94c9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bef27720c34511243f291d362b8db14aec126558eee232d258393ae1865aca75\"" Jul 11 05:12:48.697721 containerd[1533]: time="2025-07-11T05:12:48.697695485Z" level=info msg="StartContainer for \"bef27720c34511243f291d362b8db14aec126558eee232d258393ae1865aca75\"" Jul 11 05:12:48.698950 containerd[1533]: time="2025-07-11T05:12:48.698923603Z" level=info msg="connecting to shim bef27720c34511243f291d362b8db14aec126558eee232d258393ae1865aca75" address="unix:///run/containerd/s/21e11f04ffe6ee149eb4e5fb936f7112184af4e99ca78c7cb1774d2d1a2da889" protocol=ttrpc version=3 Jul 11 05:12:48.720199 systemd[1]: Started cri-containerd-bef27720c34511243f291d362b8db14aec126558eee232d258393ae1865aca75.scope - libcontainer container bef27720c34511243f291d362b8db14aec126558eee232d258393ae1865aca75. Jul 11 05:12:48.752705 containerd[1533]: time="2025-07-11T05:12:48.752649530Z" level=info msg="StartContainer for \"bef27720c34511243f291d362b8db14aec126558eee232d258393ae1865aca75\" returns successfully" Jul 11 05:12:49.291201 systemd[1]: cri-containerd-bef27720c34511243f291d362b8db14aec126558eee232d258393ae1865aca75.scope: Deactivated successfully. Jul 11 05:12:49.291540 systemd[1]: cri-containerd-bef27720c34511243f291d362b8db14aec126558eee232d258393ae1865aca75.scope: Consumed 468ms CPU time, 177.5M memory peak, 1.9M read from disk, 165.8M written to disk. 
Jul 11 05:12:49.292507 containerd[1533]: time="2025-07-11T05:12:49.292472766Z" level=info msg="received exit event container_id:\"bef27720c34511243f291d362b8db14aec126558eee232d258393ae1865aca75\" id:\"bef27720c34511243f291d362b8db14aec126558eee232d258393ae1865aca75\" pid:3427 exited_at:{seconds:1752210769 nanos:292136004}" Jul 11 05:12:49.293067 containerd[1533]: time="2025-07-11T05:12:49.292963867Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bef27720c34511243f291d362b8db14aec126558eee232d258393ae1865aca75\" id:\"bef27720c34511243f291d362b8db14aec126558eee232d258393ae1865aca75\" pid:3427 exited_at:{seconds:1752210769 nanos:292136004}" Jul 11 05:12:49.325518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bef27720c34511243f291d362b8db14aec126558eee232d258393ae1865aca75-rootfs.mount: Deactivated successfully. Jul 11 05:12:49.341082 kubelet[2671]: I0711 05:12:49.340809 2671 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 11 05:12:49.447645 systemd[1]: Created slice kubepods-burstable-podddb2fcba_eef7_4cc0_812c_e6293859c9a8.slice - libcontainer container kubepods-burstable-podddb2fcba_eef7_4cc0_812c_e6293859c9a8.slice. Jul 11 05:12:49.455727 systemd[1]: Created slice kubepods-besteffort-pod46dce721_fca6_4fb3_90cb_6ea1278e55d3.slice - libcontainer container kubepods-besteffort-pod46dce721_fca6_4fb3_90cb_6ea1278e55d3.slice. Jul 11 05:12:49.463508 systemd[1]: Created slice kubepods-burstable-pod21ff705d_19fe_42bb_bec5_e77722b62149.slice - libcontainer container kubepods-burstable-pod21ff705d_19fe_42bb_bec5_e77722b62149.slice. Jul 11 05:12:49.470532 systemd[1]: Created slice kubepods-besteffort-pod026e934c_f996_4488_b32e_961464e3c433.slice - libcontainer container kubepods-besteffort-pod026e934c_f996_4488_b32e_961464e3c433.slice. 
Jul 11 05:12:49.478822 systemd[1]: Created slice kubepods-besteffort-pod2793a114_a4b6_453d_99f4_97bf5b7e5a79.slice - libcontainer container kubepods-besteffort-pod2793a114_a4b6_453d_99f4_97bf5b7e5a79.slice. Jul 11 05:12:49.499091 systemd[1]: Created slice kubepods-besteffort-pod4be52158_ba8a_4d65_84d5_8d5850aa0e12.slice - libcontainer container kubepods-besteffort-pod4be52158_ba8a_4d65_84d5_8d5850aa0e12.slice. Jul 11 05:12:49.504246 systemd[1]: Created slice kubepods-besteffort-pod4a97364c_368b_41d9_83d8_fb76bf76e4bb.slice - libcontainer container kubepods-besteffort-pod4a97364c_368b_41d9_83d8_fb76bf76e4bb.slice. Jul 11 05:12:49.591179 kubelet[2671]: I0711 05:12:49.591066 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2793a114-a4b6-453d-99f4-97bf5b7e5a79-whisker-ca-bundle\") pod \"whisker-6cdf89dcb9-q2qmn\" (UID: \"2793a114-a4b6-453d-99f4-97bf5b7e5a79\") " pod="calico-system/whisker-6cdf89dcb9-q2qmn" Jul 11 05:12:49.591179 kubelet[2671]: I0711 05:12:49.591112 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frm92\" (UniqueName: \"kubernetes.io/projected/4a97364c-368b-41d9-83d8-fb76bf76e4bb-kube-api-access-frm92\") pod \"goldmane-768f4c5c69-45rdt\" (UID: \"4a97364c-368b-41d9-83d8-fb76bf76e4bb\") " pod="calico-system/goldmane-768f4c5c69-45rdt" Jul 11 05:12:49.591179 kubelet[2671]: I0711 05:12:49.591131 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/46dce721-fca6-4fb3-90cb-6ea1278e55d3-calico-apiserver-certs\") pod \"calico-apiserver-6b75654db8-xnd5b\" (UID: \"46dce721-fca6-4fb3-90cb-6ea1278e55d3\") " pod="calico-apiserver/calico-apiserver-6b75654db8-xnd5b" Jul 11 05:12:49.591179 kubelet[2671]: I0711 05:12:49.591149 2671 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zxqk\" (UniqueName: \"kubernetes.io/projected/4be52158-ba8a-4d65-84d5-8d5850aa0e12-kube-api-access-5zxqk\") pod \"calico-apiserver-6b75654db8-h4sk4\" (UID: \"4be52158-ba8a-4d65-84d5-8d5850aa0e12\") " pod="calico-apiserver/calico-apiserver-6b75654db8-h4sk4" Jul 11 05:12:49.591361 kubelet[2671]: I0711 05:12:49.591326 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21ff705d-19fe-42bb-bec5-e77722b62149-config-volume\") pod \"coredns-668d6bf9bc-mmj54\" (UID: \"21ff705d-19fe-42bb-bec5-e77722b62149\") " pod="kube-system/coredns-668d6bf9bc-mmj54" Jul 11 05:12:49.591388 kubelet[2671]: I0711 05:12:49.591362 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-865dp\" (UniqueName: \"kubernetes.io/projected/ddb2fcba-eef7-4cc0-812c-e6293859c9a8-kube-api-access-865dp\") pod \"coredns-668d6bf9bc-252sd\" (UID: \"ddb2fcba-eef7-4cc0-812c-e6293859c9a8\") " pod="kube-system/coredns-668d6bf9bc-252sd" Jul 11 05:12:49.591413 kubelet[2671]: I0711 05:12:49.591387 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2793a114-a4b6-453d-99f4-97bf5b7e5a79-whisker-backend-key-pair\") pod \"whisker-6cdf89dcb9-q2qmn\" (UID: \"2793a114-a4b6-453d-99f4-97bf5b7e5a79\") " pod="calico-system/whisker-6cdf89dcb9-q2qmn" Jul 11 05:12:49.591413 kubelet[2671]: I0711 05:12:49.591407 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjjjx\" (UniqueName: \"kubernetes.io/projected/21ff705d-19fe-42bb-bec5-e77722b62149-kube-api-access-cjjjx\") pod \"coredns-668d6bf9bc-mmj54\" (UID: \"21ff705d-19fe-42bb-bec5-e77722b62149\") " pod="kube-system/coredns-668d6bf9bc-mmj54" Jul 11 
05:12:49.591455 kubelet[2671]: I0711 05:12:49.591427 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4a97364c-368b-41d9-83d8-fb76bf76e4bb-goldmane-key-pair\") pod \"goldmane-768f4c5c69-45rdt\" (UID: \"4a97364c-368b-41d9-83d8-fb76bf76e4bb\") " pod="calico-system/goldmane-768f4c5c69-45rdt" Jul 11 05:12:49.591455 kubelet[2671]: I0711 05:12:49.591446 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpsxg\" (UniqueName: \"kubernetes.io/projected/46dce721-fca6-4fb3-90cb-6ea1278e55d3-kube-api-access-hpsxg\") pod \"calico-apiserver-6b75654db8-xnd5b\" (UID: \"46dce721-fca6-4fb3-90cb-6ea1278e55d3\") " pod="calico-apiserver/calico-apiserver-6b75654db8-xnd5b" Jul 11 05:12:49.591494 kubelet[2671]: I0711 05:12:49.591470 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tzcg\" (UniqueName: \"kubernetes.io/projected/2793a114-a4b6-453d-99f4-97bf5b7e5a79-kube-api-access-9tzcg\") pod \"whisker-6cdf89dcb9-q2qmn\" (UID: \"2793a114-a4b6-453d-99f4-97bf5b7e5a79\") " pod="calico-system/whisker-6cdf89dcb9-q2qmn" Jul 11 05:12:49.591519 kubelet[2671]: I0711 05:12:49.591491 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/026e934c-f996-4488-b32e-961464e3c433-tigera-ca-bundle\") pod \"calico-kube-controllers-5fc56655d8-fzxgs\" (UID: \"026e934c-f996-4488-b32e-961464e3c433\") " pod="calico-system/calico-kube-controllers-5fc56655d8-fzxgs" Jul 11 05:12:49.591519 kubelet[2671]: I0711 05:12:49.591513 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a97364c-368b-41d9-83d8-fb76bf76e4bb-config\") pod \"goldmane-768f4c5c69-45rdt\" (UID: 
\"4a97364c-368b-41d9-83d8-fb76bf76e4bb\") " pod="calico-system/goldmane-768f4c5c69-45rdt" Jul 11 05:12:49.591558 kubelet[2671]: I0711 05:12:49.591528 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4be52158-ba8a-4d65-84d5-8d5850aa0e12-calico-apiserver-certs\") pod \"calico-apiserver-6b75654db8-h4sk4\" (UID: \"4be52158-ba8a-4d65-84d5-8d5850aa0e12\") " pod="calico-apiserver/calico-apiserver-6b75654db8-h4sk4" Jul 11 05:12:49.591558 kubelet[2671]: I0711 05:12:49.591545 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ddb2fcba-eef7-4cc0-812c-e6293859c9a8-config-volume\") pod \"coredns-668d6bf9bc-252sd\" (UID: \"ddb2fcba-eef7-4cc0-812c-e6293859c9a8\") " pod="kube-system/coredns-668d6bf9bc-252sd" Jul 11 05:12:49.591597 kubelet[2671]: I0711 05:12:49.591559 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9djmb\" (UniqueName: \"kubernetes.io/projected/026e934c-f996-4488-b32e-961464e3c433-kube-api-access-9djmb\") pod \"calico-kube-controllers-5fc56655d8-fzxgs\" (UID: \"026e934c-f996-4488-b32e-961464e3c433\") " pod="calico-system/calico-kube-controllers-5fc56655d8-fzxgs" Jul 11 05:12:49.591597 kubelet[2671]: I0711 05:12:49.591577 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a97364c-368b-41d9-83d8-fb76bf76e4bb-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-45rdt\" (UID: \"4a97364c-368b-41d9-83d8-fb76bf76e4bb\") " pod="calico-system/goldmane-768f4c5c69-45rdt" Jul 11 05:12:49.753194 containerd[1533]: time="2025-07-11T05:12:49.753136939Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-252sd,Uid:ddb2fcba-eef7-4cc0-812c-e6293859c9a8,Namespace:kube-system,Attempt:0,}" Jul 11 05:12:49.759947 containerd[1533]: time="2025-07-11T05:12:49.759739476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b75654db8-xnd5b,Uid:46dce721-fca6-4fb3-90cb-6ea1278e55d3,Namespace:calico-apiserver,Attempt:0,}" Jul 11 05:12:49.767370 containerd[1533]: time="2025-07-11T05:12:49.767336737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mmj54,Uid:21ff705d-19fe-42bb-bec5-e77722b62149,Namespace:kube-system,Attempt:0,}" Jul 11 05:12:49.786462 containerd[1533]: time="2025-07-11T05:12:49.786303406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc56655d8-fzxgs,Uid:026e934c-f996-4488-b32e-961464e3c433,Namespace:calico-system,Attempt:0,}" Jul 11 05:12:49.809671 containerd[1533]: time="2025-07-11T05:12:49.803150893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cdf89dcb9-q2qmn,Uid:2793a114-a4b6-453d-99f4-97bf5b7e5a79,Namespace:calico-system,Attempt:0,}" Jul 11 05:12:49.809671 containerd[1533]: time="2025-07-11T05:12:49.809577209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-45rdt,Uid:4a97364c-368b-41d9-83d8-fb76bf76e4bb,Namespace:calico-system,Attempt:0,}" Jul 11 05:12:49.809798 containerd[1533]: time="2025-07-11T05:12:49.809759991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b75654db8-h4sk4,Uid:4be52158-ba8a-4d65-84d5-8d5850aa0e12,Namespace:calico-apiserver,Attempt:0,}" Jul 11 05:12:49.885393 systemd[1]: Created slice kubepods-besteffort-pod23f37a14_7ca7_435f_adc0_f7dd1f70c437.slice - libcontainer container kubepods-besteffort-pod23f37a14_7ca7_435f_adc0_f7dd1f70c437.slice. 
Jul 11 05:12:49.913119 containerd[1533]: time="2025-07-11T05:12:49.910139263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sv579,Uid:23f37a14-7ca7-435f-adc0-f7dd1f70c437,Namespace:calico-system,Attempt:0,}" Jul 11 05:12:49.963093 containerd[1533]: time="2025-07-11T05:12:49.963064178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 11 05:12:50.258582 containerd[1533]: time="2025-07-11T05:12:50.258314241Z" level=error msg="Failed to destroy network for sandbox \"ea8e9b2f405339841a230b98b626a28c0ed4ce1757454fa77c9d7459570ecfed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.259399 containerd[1533]: time="2025-07-11T05:12:50.259338843Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b75654db8-h4sk4,Uid:4be52158-ba8a-4d65-84d5-8d5850aa0e12,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea8e9b2f405339841a230b98b626a28c0ed4ce1757454fa77c9d7459570ecfed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.259677 containerd[1533]: time="2025-07-11T05:12:50.259636798Z" level=error msg="Failed to destroy network for sandbox \"fcb30adfebb616a1fe9e587240984ad83eabfd4c46781a19d0568095c7029bf9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.259762 kubelet[2671]: E0711 05:12:50.259712 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ea8e9b2f405339841a230b98b626a28c0ed4ce1757454fa77c9d7459570ecfed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.260225 containerd[1533]: time="2025-07-11T05:12:50.260038446Z" level=error msg="Failed to destroy network for sandbox \"7ebae26ca342700a1dbaa19ad8043c537b32e7e72264893a5b3befcb3211faef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.260951 containerd[1533]: time="2025-07-11T05:12:50.260829660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-45rdt,Uid:4a97364c-368b-41d9-83d8-fb76bf76e4bb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcb30adfebb616a1fe9e587240984ad83eabfd4c46781a19d0568095c7029bf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.261283 kubelet[2671]: E0711 05:12:50.261246 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcb30adfebb616a1fe9e587240984ad83eabfd4c46781a19d0568095c7029bf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.261354 kubelet[2671]: E0711 05:12:50.261304 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcb30adfebb616a1fe9e587240984ad83eabfd4c46781a19d0568095c7029bf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-45rdt" Jul 11 05:12:50.261354 kubelet[2671]: E0711 05:12:50.261326 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fcb30adfebb616a1fe9e587240984ad83eabfd4c46781a19d0568095c7029bf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-45rdt" Jul 11 05:12:50.261512 kubelet[2671]: E0711 05:12:50.261378 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-45rdt_calico-system(4a97364c-368b-41d9-83d8-fb76bf76e4bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-45rdt_calico-system(4a97364c-368b-41d9-83d8-fb76bf76e4bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fcb30adfebb616a1fe9e587240984ad83eabfd4c46781a19d0568095c7029bf9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-45rdt" podUID="4a97364c-368b-41d9-83d8-fb76bf76e4bb" Jul 11 05:12:50.261960 containerd[1533]: time="2025-07-11T05:12:50.261909549Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sv579,Uid:23f37a14-7ca7-435f-adc0-f7dd1f70c437,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ebae26ca342700a1dbaa19ad8043c537b32e7e72264893a5b3befcb3211faef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" Jul 11 05:12:50.262265 kubelet[2671]: E0711 05:12:50.262079 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea8e9b2f405339841a230b98b626a28c0ed4ce1757454fa77c9d7459570ecfed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b75654db8-h4sk4" Jul 11 05:12:50.262265 kubelet[2671]: E0711 05:12:50.262119 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ea8e9b2f405339841a230b98b626a28c0ed4ce1757454fa77c9d7459570ecfed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b75654db8-h4sk4" Jul 11 05:12:50.262265 kubelet[2671]: E0711 05:12:50.262159 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b75654db8-h4sk4_calico-apiserver(4be52158-ba8a-4d65-84d5-8d5850aa0e12)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b75654db8-h4sk4_calico-apiserver(4be52158-ba8a-4d65-84d5-8d5850aa0e12)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ea8e9b2f405339841a230b98b626a28c0ed4ce1757454fa77c9d7459570ecfed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b75654db8-h4sk4" podUID="4be52158-ba8a-4d65-84d5-8d5850aa0e12" Jul 11 05:12:50.262568 kubelet[2671]: E0711 05:12:50.262265 2671 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ebae26ca342700a1dbaa19ad8043c537b32e7e72264893a5b3befcb3211faef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.262568 kubelet[2671]: E0711 05:12:50.262295 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ebae26ca342700a1dbaa19ad8043c537b32e7e72264893a5b3befcb3211faef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sv579" Jul 11 05:12:50.262568 kubelet[2671]: E0711 05:12:50.262309 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ebae26ca342700a1dbaa19ad8043c537b32e7e72264893a5b3befcb3211faef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-sv579" Jul 11 05:12:50.262663 kubelet[2671]: E0711 05:12:50.262343 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-sv579_calico-system(23f37a14-7ca7-435f-adc0-f7dd1f70c437)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-sv579_calico-system(23f37a14-7ca7-435f-adc0-f7dd1f70c437)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ebae26ca342700a1dbaa19ad8043c537b32e7e72264893a5b3befcb3211faef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-sv579" podUID="23f37a14-7ca7-435f-adc0-f7dd1f70c437" Jul 11 05:12:50.270140 containerd[1533]: time="2025-07-11T05:12:50.270080842Z" level=error msg="Failed to destroy network for sandbox \"3546661e707dca8c592f7cb1f9c409e936fc1c2508e33e2619eadac40e536577\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.270461 containerd[1533]: time="2025-07-11T05:12:50.270401840Z" level=error msg="Failed to destroy network for sandbox \"072bde01928cfe7b765cc3a3eb690dd399e75f6ff9574459f98100d524b80d95\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.270852 containerd[1533]: time="2025-07-11T05:12:50.270815249Z" level=error msg="Failed to destroy network for sandbox \"8b6a3a36b950e9d4c0913c3bd0f968c00c24edcf53e45628caef016ea4ac1ac5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.271326 containerd[1533]: time="2025-07-11T05:12:50.271238340Z" level=error msg="Failed to destroy network for sandbox \"d01c74ff8cabf8d9b46a328f03cf974bd34262a1598b3b859498a90c4acef816\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.271552 containerd[1533]: time="2025-07-11T05:12:50.271517653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6cdf89dcb9-q2qmn,Uid:2793a114-a4b6-453d-99f4-97bf5b7e5a79,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"072bde01928cfe7b765cc3a3eb690dd399e75f6ff9574459f98100d524b80d95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.271728 kubelet[2671]: E0711 05:12:50.271681 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"072bde01928cfe7b765cc3a3eb690dd399e75f6ff9574459f98100d524b80d95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.271728 kubelet[2671]: E0711 05:12:50.271722 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"072bde01928cfe7b765cc3a3eb690dd399e75f6ff9574459f98100d524b80d95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cdf89dcb9-q2qmn" Jul 11 05:12:50.271807 kubelet[2671]: E0711 05:12:50.271739 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"072bde01928cfe7b765cc3a3eb690dd399e75f6ff9574459f98100d524b80d95\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6cdf89dcb9-q2qmn" Jul 11 05:12:50.271807 kubelet[2671]: E0711 05:12:50.271777 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6cdf89dcb9-q2qmn_calico-system(2793a114-a4b6-453d-99f4-97bf5b7e5a79)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-6cdf89dcb9-q2qmn_calico-system(2793a114-a4b6-453d-99f4-97bf5b7e5a79)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"072bde01928cfe7b765cc3a3eb690dd399e75f6ff9574459f98100d524b80d95\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6cdf89dcb9-q2qmn" podUID="2793a114-a4b6-453d-99f4-97bf5b7e5a79" Jul 11 05:12:50.272586 containerd[1533]: time="2025-07-11T05:12:50.272520733Z" level=error msg="Failed to destroy network for sandbox \"4714ccc4aab61f0081ed94a65503726fd478e6549e27356922db954af713ee60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.272733 containerd[1533]: time="2025-07-11T05:12:50.272704874Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-252sd,Uid:ddb2fcba-eef7-4cc0-812c-e6293859c9a8,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3546661e707dca8c592f7cb1f9c409e936fc1c2508e33e2619eadac40e536577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.273031 kubelet[2671]: E0711 05:12:50.273001 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3546661e707dca8c592f7cb1f9c409e936fc1c2508e33e2619eadac40e536577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.273087 kubelet[2671]: E0711 05:12:50.273044 2671 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3546661e707dca8c592f7cb1f9c409e936fc1c2508e33e2619eadac40e536577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-252sd" Jul 11 05:12:50.273087 kubelet[2671]: E0711 05:12:50.273061 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3546661e707dca8c592f7cb1f9c409e936fc1c2508e33e2619eadac40e536577\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-252sd" Jul 11 05:12:50.273207 kubelet[2671]: E0711 05:12:50.273091 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-252sd_kube-system(ddb2fcba-eef7-4cc0-812c-e6293859c9a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-252sd_kube-system(ddb2fcba-eef7-4cc0-812c-e6293859c9a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3546661e707dca8c592f7cb1f9c409e936fc1c2508e33e2619eadac40e536577\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-252sd" podUID="ddb2fcba-eef7-4cc0-812c-e6293859c9a8" Jul 11 05:12:50.273288 containerd[1533]: time="2025-07-11T05:12:50.273220536Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b75654db8-xnd5b,Uid:46dce721-fca6-4fb3-90cb-6ea1278e55d3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"d01c74ff8cabf8d9b46a328f03cf974bd34262a1598b3b859498a90c4acef816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.273373 kubelet[2671]: E0711 05:12:50.273342 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d01c74ff8cabf8d9b46a328f03cf974bd34262a1598b3b859498a90c4acef816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.273426 kubelet[2671]: E0711 05:12:50.273370 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d01c74ff8cabf8d9b46a328f03cf974bd34262a1598b3b859498a90c4acef816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b75654db8-xnd5b" Jul 11 05:12:50.273426 kubelet[2671]: E0711 05:12:50.273385 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d01c74ff8cabf8d9b46a328f03cf974bd34262a1598b3b859498a90c4acef816\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b75654db8-xnd5b" Jul 11 05:12:50.273481 kubelet[2671]: E0711 05:12:50.273416 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6b75654db8-xnd5b_calico-apiserver(46dce721-fca6-4fb3-90cb-6ea1278e55d3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b75654db8-xnd5b_calico-apiserver(46dce721-fca6-4fb3-90cb-6ea1278e55d3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d01c74ff8cabf8d9b46a328f03cf974bd34262a1598b3b859498a90c4acef816\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b75654db8-xnd5b" podUID="46dce721-fca6-4fb3-90cb-6ea1278e55d3" Jul 11 05:12:50.274333 containerd[1533]: time="2025-07-11T05:12:50.274292704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc56655d8-fzxgs,Uid:026e934c-f996-4488-b32e-961464e3c433,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6a3a36b950e9d4c0913c3bd0f968c00c24edcf53e45628caef016ea4ac1ac5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.274508 kubelet[2671]: E0711 05:12:50.274478 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6a3a36b950e9d4c0913c3bd0f968c00c24edcf53e45628caef016ea4ac1ac5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.274557 kubelet[2671]: E0711 05:12:50.274522 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6a3a36b950e9d4c0913c3bd0f968c00c24edcf53e45628caef016ea4ac1ac5\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fc56655d8-fzxgs" Jul 11 05:12:50.274557 kubelet[2671]: E0711 05:12:50.274543 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b6a3a36b950e9d4c0913c3bd0f968c00c24edcf53e45628caef016ea4ac1ac5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5fc56655d8-fzxgs" Jul 11 05:12:50.274617 kubelet[2671]: E0711 05:12:50.274580 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5fc56655d8-fzxgs_calico-system(026e934c-f996-4488-b32e-961464e3c433)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5fc56655d8-fzxgs_calico-system(026e934c-f996-4488-b32e-961464e3c433)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b6a3a36b950e9d4c0913c3bd0f968c00c24edcf53e45628caef016ea4ac1ac5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5fc56655d8-fzxgs" podUID="026e934c-f996-4488-b32e-961464e3c433" Jul 11 05:12:50.274932 containerd[1533]: time="2025-07-11T05:12:50.274896976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mmj54,Uid:21ff705d-19fe-42bb-bec5-e77722b62149,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4714ccc4aab61f0081ed94a65503726fd478e6549e27356922db954af713ee60\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.275145 kubelet[2671]: E0711 05:12:50.275081 2671 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4714ccc4aab61f0081ed94a65503726fd478e6549e27356922db954af713ee60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 11 05:12:50.275145 kubelet[2671]: E0711 05:12:50.275117 2671 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4714ccc4aab61f0081ed94a65503726fd478e6549e27356922db954af713ee60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mmj54" Jul 11 05:12:50.275145 kubelet[2671]: E0711 05:12:50.275133 2671 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4714ccc4aab61f0081ed94a65503726fd478e6549e27356922db954af713ee60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-mmj54" Jul 11 05:12:50.275689 kubelet[2671]: E0711 05:12:50.275165 2671 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-mmj54_kube-system(21ff705d-19fe-42bb-bec5-e77722b62149)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-mmj54_kube-system(21ff705d-19fe-42bb-bec5-e77722b62149)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"4714ccc4aab61f0081ed94a65503726fd478e6549e27356922db954af713ee60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-mmj54" podUID="21ff705d-19fe-42bb-bec5-e77722b62149" Jul 11 05:12:53.666793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3816589437.mount: Deactivated successfully. Jul 11 05:12:53.918161 containerd[1533]: time="2025-07-11T05:12:53.918000314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=152544909" Jul 11 05:12:53.919021 containerd[1533]: time="2025-07-11T05:12:53.918964417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:53.920632 containerd[1533]: time="2025-07-11T05:12:53.920601591Z" level=info msg="ImageCreate event name:\"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:53.921396 containerd[1533]: time="2025-07-11T05:12:53.921336749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"152544771\" in 3.958116032s" Jul 11 05:12:53.921396 containerd[1533]: time="2025-07-11T05:12:53.921363552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:1c6ddca599ddd18c061e797a7830b0aea985f8b023c5e43d815a9ed1088893a9\"" Jul 11 05:12:53.921635 containerd[1533]: time="2025-07-11T05:12:53.921598857Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:53.944552 containerd[1533]: time="2025-07-11T05:12:53.944454892Z" level=info msg="CreateContainer within sandbox \"ce017d15eaf6cc203570b46b2c4d7f8a25c3209d06c55c4b9f7a07a75c0a94c9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 11 05:12:53.973401 containerd[1533]: time="2025-07-11T05:12:53.972149723Z" level=info msg="Container 0524df5fa1efa5fff86d682c93868c9015587ed65c90210d9b36c0d2456bd0c0: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:12:53.987422 containerd[1533]: time="2025-07-11T05:12:53.987334421Z" level=info msg="CreateContainer within sandbox \"ce017d15eaf6cc203570b46b2c4d7f8a25c3209d06c55c4b9f7a07a75c0a94c9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0524df5fa1efa5fff86d682c93868c9015587ed65c90210d9b36c0d2456bd0c0\"" Jul 11 05:12:53.987943 containerd[1533]: time="2025-07-11T05:12:53.987905201Z" level=info msg="StartContainer for \"0524df5fa1efa5fff86d682c93868c9015587ed65c90210d9b36c0d2456bd0c0\"" Jul 11 05:12:53.989836 containerd[1533]: time="2025-07-11T05:12:53.989768600Z" level=info msg="connecting to shim 0524df5fa1efa5fff86d682c93868c9015587ed65c90210d9b36c0d2456bd0c0" address="unix:///run/containerd/s/21e11f04ffe6ee149eb4e5fb936f7112184af4e99ca78c7cb1774d2d1a2da889" protocol=ttrpc version=3 Jul 11 05:12:54.021129 systemd[1]: Started cri-containerd-0524df5fa1efa5fff86d682c93868c9015587ed65c90210d9b36c0d2456bd0c0.scope - libcontainer container 0524df5fa1efa5fff86d682c93868c9015587ed65c90210d9b36c0d2456bd0c0. Jul 11 05:12:54.069890 containerd[1533]: time="2025-07-11T05:12:54.069308023Z" level=info msg="StartContainer for \"0524df5fa1efa5fff86d682c93868c9015587ed65c90210d9b36c0d2456bd0c0\" returns successfully" Jul 11 05:12:54.264076 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Jul 11 05:12:54.264168 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 11 05:12:54.424583 kubelet[2671]: I0711 05:12:54.424524 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2793a114-a4b6-453d-99f4-97bf5b7e5a79-whisker-ca-bundle\") pod \"2793a114-a4b6-453d-99f4-97bf5b7e5a79\" (UID: \"2793a114-a4b6-453d-99f4-97bf5b7e5a79\") " Jul 11 05:12:54.424583 kubelet[2671]: I0711 05:12:54.424591 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2793a114-a4b6-453d-99f4-97bf5b7e5a79-whisker-backend-key-pair\") pod \"2793a114-a4b6-453d-99f4-97bf5b7e5a79\" (UID: \"2793a114-a4b6-453d-99f4-97bf5b7e5a79\") " Jul 11 05:12:54.424583 kubelet[2671]: I0711 05:12:54.424612 2671 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tzcg\" (UniqueName: \"kubernetes.io/projected/2793a114-a4b6-453d-99f4-97bf5b7e5a79-kube-api-access-9tzcg\") pod \"2793a114-a4b6-453d-99f4-97bf5b7e5a79\" (UID: \"2793a114-a4b6-453d-99f4-97bf5b7e5a79\") " Jul 11 05:12:54.425546 kubelet[2671]: I0711 05:12:54.424873 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2793a114-a4b6-453d-99f4-97bf5b7e5a79-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "2793a114-a4b6-453d-99f4-97bf5b7e5a79" (UID: "2793a114-a4b6-453d-99f4-97bf5b7e5a79"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 11 05:12:54.425546 kubelet[2671]: I0711 05:12:54.425171 2671 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2793a114-a4b6-453d-99f4-97bf5b7e5a79-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 11 05:12:54.428148 kubelet[2671]: I0711 05:12:54.428103 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2793a114-a4b6-453d-99f4-97bf5b7e5a79-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "2793a114-a4b6-453d-99f4-97bf5b7e5a79" (UID: "2793a114-a4b6-453d-99f4-97bf5b7e5a79"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 11 05:12:54.428499 kubelet[2671]: I0711 05:12:54.428416 2671 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2793a114-a4b6-453d-99f4-97bf5b7e5a79-kube-api-access-9tzcg" (OuterVolumeSpecName: "kube-api-access-9tzcg") pod "2793a114-a4b6-453d-99f4-97bf5b7e5a79" (UID: "2793a114-a4b6-453d-99f4-97bf5b7e5a79"). InnerVolumeSpecName "kube-api-access-9tzcg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 11 05:12:54.526048 kubelet[2671]: I0711 05:12:54.525923 2671 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/2793a114-a4b6-453d-99f4-97bf5b7e5a79-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 11 05:12:54.526048 kubelet[2671]: I0711 05:12:54.525985 2671 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9tzcg\" (UniqueName: \"kubernetes.io/projected/2793a114-a4b6-453d-99f4-97bf5b7e5a79-kube-api-access-9tzcg\") on node \"localhost\" DevicePath \"\"" Jul 11 05:12:54.667684 systemd[1]: var-lib-kubelet-pods-2793a114\x2da4b6\x2d453d\x2d99f4\x2d97bf5b7e5a79-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9tzcg.mount: Deactivated successfully. Jul 11 05:12:54.667782 systemd[1]: var-lib-kubelet-pods-2793a114\x2da4b6\x2d453d\x2d99f4\x2d97bf5b7e5a79-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jul 11 05:12:54.980431 systemd[1]: Removed slice kubepods-besteffort-pod2793a114_a4b6_453d_99f4_97bf5b7e5a79.slice - libcontainer container kubepods-besteffort-pod2793a114_a4b6_453d_99f4_97bf5b7e5a79.slice. 
Jul 11 05:12:54.993578 kubelet[2671]: I0711 05:12:54.993516 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g4r5n" podStartSLOduration=2.278983857 podStartE2EDuration="13.993498848s" podCreationTimestamp="2025-07-11 05:12:41 +0000 UTC" firstStartedPulling="2025-07-11 05:12:42.207911394 +0000 UTC m=+18.466360633" lastFinishedPulling="2025-07-11 05:12:53.922426385 +0000 UTC m=+30.180875624" observedRunningTime="2025-07-11 05:12:54.993003317 +0000 UTC m=+31.251452516" watchObservedRunningTime="2025-07-11 05:12:54.993498848 +0000 UTC m=+31.251948087" Jul 11 05:12:55.041875 systemd[1]: Created slice kubepods-besteffort-pod44e87af1_c706_484a_a8c5_f1a251fd4fcc.slice - libcontainer container kubepods-besteffort-pod44e87af1_c706_484a_a8c5_f1a251fd4fcc.slice. Jul 11 05:12:55.131041 kubelet[2671]: I0711 05:12:55.130999 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9kjl\" (UniqueName: \"kubernetes.io/projected/44e87af1-c706-484a-a8c5-f1a251fd4fcc-kube-api-access-b9kjl\") pod \"whisker-66cd6888c9-6fnxl\" (UID: \"44e87af1-c706-484a-a8c5-f1a251fd4fcc\") " pod="calico-system/whisker-66cd6888c9-6fnxl" Jul 11 05:12:55.131303 kubelet[2671]: I0711 05:12:55.131197 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/44e87af1-c706-484a-a8c5-f1a251fd4fcc-whisker-backend-key-pair\") pod \"whisker-66cd6888c9-6fnxl\" (UID: \"44e87af1-c706-484a-a8c5-f1a251fd4fcc\") " pod="calico-system/whisker-66cd6888c9-6fnxl" Jul 11 05:12:55.131303 kubelet[2671]: I0711 05:12:55.131227 2671 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44e87af1-c706-484a-a8c5-f1a251fd4fcc-whisker-ca-bundle\") pod \"whisker-66cd6888c9-6fnxl\" (UID: 
\"44e87af1-c706-484a-a8c5-f1a251fd4fcc\") " pod="calico-system/whisker-66cd6888c9-6fnxl" Jul 11 05:12:55.131427 containerd[1533]: time="2025-07-11T05:12:55.130957060Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0524df5fa1efa5fff86d682c93868c9015587ed65c90210d9b36c0d2456bd0c0\" id:\"8fa1443d838a91cf2df25111364d6f4c242612033148dbc3961ea0f8806bbc0b\" pid:3814 exit_status:1 exited_at:{seconds:1752210775 nanos:130651790}" Jul 11 05:12:55.349032 containerd[1533]: time="2025-07-11T05:12:55.348937891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66cd6888c9-6fnxl,Uid:44e87af1-c706-484a-a8c5-f1a251fd4fcc,Namespace:calico-system,Attempt:0,}" Jul 11 05:12:55.648652 systemd-networkd[1435]: calibb30e24df93: Link UP Jul 11 05:12:55.648899 systemd-networkd[1435]: calibb30e24df93: Gained carrier Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.407 [INFO][3830] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.454 [INFO][3830] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--66cd6888c9--6fnxl-eth0 whisker-66cd6888c9- calico-system 44e87af1-c706-484a-a8c5-f1a251fd4fcc 878 0 2025-07-11 05:12:55 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:66cd6888c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-66cd6888c9-6fnxl eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calibb30e24df93 [] [] }} ContainerID="353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" Namespace="calico-system" Pod="whisker-66cd6888c9-6fnxl" WorkloadEndpoint="localhost-k8s-whisker--66cd6888c9--6fnxl-" Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.454 [INFO][3830] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" Namespace="calico-system" Pod="whisker-66cd6888c9-6fnxl" WorkloadEndpoint="localhost-k8s-whisker--66cd6888c9--6fnxl-eth0" Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.569 [INFO][3844] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" HandleID="k8s-pod-network.353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" Workload="localhost-k8s-whisker--66cd6888c9--6fnxl-eth0" Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.569 [INFO][3844] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" HandleID="k8s-pod-network.353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" Workload="localhost-k8s-whisker--66cd6888c9--6fnxl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c2780), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-66cd6888c9-6fnxl", "timestamp":"2025-07-11 05:12:55.569095259 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.569 [INFO][3844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.569 [INFO][3844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.569 [INFO][3844] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.590 [INFO][3844] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" host="localhost" Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.603 [INFO][3844] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.607 [INFO][3844] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.609 [INFO][3844] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.611 [INFO][3844] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.613 [INFO][3844] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" host="localhost" Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.617 [INFO][3844] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600 Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.620 [INFO][3844] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" host="localhost" Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.625 [INFO][3844] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" host="localhost" Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.626 [INFO][3844] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" host="localhost" Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.626 [INFO][3844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 05:12:55.668744 containerd[1533]: 2025-07-11 05:12:55.626 [INFO][3844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" HandleID="k8s-pod-network.353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" Workload="localhost-k8s-whisker--66cd6888c9--6fnxl-eth0" Jul 11 05:12:55.669476 containerd[1533]: 2025-07-11 05:12:55.631 [INFO][3830] cni-plugin/k8s.go 418: Populated endpoint ContainerID="353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" Namespace="calico-system" Pod="whisker-66cd6888c9-6fnxl" WorkloadEndpoint="localhost-k8s-whisker--66cd6888c9--6fnxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66cd6888c9--6fnxl-eth0", GenerateName:"whisker-66cd6888c9-", Namespace:"calico-system", SelfLink:"", UID:"44e87af1-c706-484a-a8c5-f1a251fd4fcc", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66cd6888c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-66cd6888c9-6fnxl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibb30e24df93", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:12:55.669476 containerd[1533]: 2025-07-11 05:12:55.631 [INFO][3830] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" Namespace="calico-system" Pod="whisker-66cd6888c9-6fnxl" WorkloadEndpoint="localhost-k8s-whisker--66cd6888c9--6fnxl-eth0" Jul 11 05:12:55.669476 containerd[1533]: 2025-07-11 05:12:55.633 [INFO][3830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb30e24df93 ContainerID="353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" Namespace="calico-system" Pod="whisker-66cd6888c9-6fnxl" WorkloadEndpoint="localhost-k8s-whisker--66cd6888c9--6fnxl-eth0" Jul 11 05:12:55.669476 containerd[1533]: 2025-07-11 05:12:55.650 [INFO][3830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" Namespace="calico-system" Pod="whisker-66cd6888c9-6fnxl" WorkloadEndpoint="localhost-k8s-whisker--66cd6888c9--6fnxl-eth0" Jul 11 05:12:55.669476 containerd[1533]: 2025-07-11 05:12:55.650 [INFO][3830] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" Namespace="calico-system" Pod="whisker-66cd6888c9-6fnxl" 
WorkloadEndpoint="localhost-k8s-whisker--66cd6888c9--6fnxl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--66cd6888c9--6fnxl-eth0", GenerateName:"whisker-66cd6888c9-", Namespace:"calico-system", SelfLink:"", UID:"44e87af1-c706-484a-a8c5-f1a251fd4fcc", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"66cd6888c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600", Pod:"whisker-66cd6888c9-6fnxl", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calibb30e24df93", MAC:"aa:42:e2:a0:af:ea", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:12:55.669476 containerd[1533]: 2025-07-11 05:12:55.663 [INFO][3830] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" Namespace="calico-system" Pod="whisker-66cd6888c9-6fnxl" WorkloadEndpoint="localhost-k8s-whisker--66cd6888c9--6fnxl-eth0" Jul 11 05:12:55.764161 containerd[1533]: time="2025-07-11T05:12:55.764066843Z" level=info msg="connecting to shim 
353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600" address="unix:///run/containerd/s/c99310149f0be21deb707bebfbcd01e5c62ffff62b709777134213faaeb403d2" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:12:55.796132 systemd[1]: Started cri-containerd-353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600.scope - libcontainer container 353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600. Jul 11 05:12:55.807800 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:12:55.819879 kubelet[2671]: I0711 05:12:55.819828 2671 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2793a114-a4b6-453d-99f4-97bf5b7e5a79" path="/var/lib/kubelet/pods/2793a114-a4b6-453d-99f4-97bf5b7e5a79/volumes" Jul 11 05:12:55.826988 containerd[1533]: time="2025-07-11T05:12:55.826946855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-66cd6888c9-6fnxl,Uid:44e87af1-c706-484a-a8c5-f1a251fd4fcc,Namespace:calico-system,Attempt:0,} returns sandbox id \"353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600\"" Jul 11 05:12:55.828632 containerd[1533]: time="2025-07-11T05:12:55.828600139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 11 05:12:56.046059 containerd[1533]: time="2025-07-11T05:12:56.045951603Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0524df5fa1efa5fff86d682c93868c9015587ed65c90210d9b36c0d2456bd0c0\" id:\"76204ce197ab6db24cdd6dbf0ca967a8e1c4026424cde877952435f15133f85a\" pid:4020 exit_status:1 exited_at:{seconds:1752210776 nanos:45706299}" Jul 11 05:12:56.945747 containerd[1533]: time="2025-07-11T05:12:56.945701591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:56.946317 containerd[1533]: time="2025-07-11T05:12:56.946281606Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4605614" Jul 11 05:12:56.947119 containerd[1533]: time="2025-07-11T05:12:56.947080763Z" level=info msg="ImageCreate event name:\"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:56.951349 containerd[1533]: time="2025-07-11T05:12:56.951308130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:56.952622 containerd[1533]: time="2025-07-11T05:12:56.952579612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"5974847\" in 1.123888384s" Jul 11 05:12:56.952622 containerd[1533]: time="2025-07-11T05:12:56.952613135Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:309942601a9ca6c4e92bcd09162824fef1c137a5c5d92fbbb45be0f29bfd1817\"" Jul 11 05:12:56.954626 containerd[1533]: time="2025-07-11T05:12:56.954590726Z" level=info msg="CreateContainer within sandbox \"353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 11 05:12:56.959995 containerd[1533]: time="2025-07-11T05:12:56.959424471Z" level=info msg="Container 90d48d0b63adfb8b0618386aa4720b382d6cd07bcb017d70901ee8fa8ad5bb61: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:12:56.966271 containerd[1533]: time="2025-07-11T05:12:56.966238646Z" level=info msg="CreateContainer within sandbox \"353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"90d48d0b63adfb8b0618386aa4720b382d6cd07bcb017d70901ee8fa8ad5bb61\"" Jul 11 05:12:56.966901 containerd[1533]: time="2025-07-11T05:12:56.966792899Z" level=info msg="StartContainer for \"90d48d0b63adfb8b0618386aa4720b382d6cd07bcb017d70901ee8fa8ad5bb61\"" Jul 11 05:12:56.968569 containerd[1533]: time="2025-07-11T05:12:56.968531787Z" level=info msg="connecting to shim 90d48d0b63adfb8b0618386aa4720b382d6cd07bcb017d70901ee8fa8ad5bb61" address="unix:///run/containerd/s/c99310149f0be21deb707bebfbcd01e5c62ffff62b709777134213faaeb403d2" protocol=ttrpc version=3 Jul 11 05:12:56.987144 systemd[1]: Started cri-containerd-90d48d0b63adfb8b0618386aa4720b382d6cd07bcb017d70901ee8fa8ad5bb61.scope - libcontainer container 90d48d0b63adfb8b0618386aa4720b382d6cd07bcb017d70901ee8fa8ad5bb61. Jul 11 05:12:57.030045 containerd[1533]: time="2025-07-11T05:12:57.029939646Z" level=info msg="StartContainer for \"90d48d0b63adfb8b0618386aa4720b382d6cd07bcb017d70901ee8fa8ad5bb61\" returns successfully" Jul 11 05:12:57.031587 containerd[1533]: time="2025-07-11T05:12:57.031514752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 11 05:12:57.058339 containerd[1533]: time="2025-07-11T05:12:57.058299448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0524df5fa1efa5fff86d682c93868c9015587ed65c90210d9b36c0d2456bd0c0\" id:\"e1aa2fad08d3347117515d90a2164723afd160d1e12c0d3c54a4f8c1a7c422df\" pid:4094 exit_status:1 exited_at:{seconds:1752210777 nanos:57959896}" Jul 11 05:12:57.440108 systemd-networkd[1435]: calibb30e24df93: Gained IPv6LL Jul 11 05:12:58.694363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3073019739.mount: Deactivated successfully. 
Jul 11 05:12:58.707307 containerd[1533]: time="2025-07-11T05:12:58.707271204Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:58.707770 containerd[1533]: time="2025-07-11T05:12:58.707738686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=30814581" Jul 11 05:12:58.708728 containerd[1533]: time="2025-07-11T05:12:58.708664450Z" level=info msg="ImageCreate event name:\"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:58.710611 containerd[1533]: time="2025-07-11T05:12:58.710541659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:12:58.711670 containerd[1533]: time="2025-07-11T05:12:58.711624237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"30814411\" in 1.680069801s" Jul 11 05:12:58.711670 containerd[1533]: time="2025-07-11T05:12:58.711661160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:8763d908c0cd23d0e87bc61ce1ba8371b86449688baf955e5eeff7f7d7e101c4\"" Jul 11 05:12:58.714486 containerd[1533]: time="2025-07-11T05:12:58.714450252Z" level=info msg="CreateContainer within sandbox \"353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 11 05:12:58.737045 
containerd[1533]: time="2025-07-11T05:12:58.736298626Z" level=info msg="Container 84c51892540357175d05b857c64154b5daff004d078ef4ca10d14899d93a7e02: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:12:58.744240 containerd[1533]: time="2025-07-11T05:12:58.744184298Z" level=info msg="CreateContainer within sandbox \"353ea9bf83ed9ab72a2f11bd0eb1a0723fbcba8332f20ed43ef3a0ea64657600\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"84c51892540357175d05b857c64154b5daff004d078ef4ca10d14899d93a7e02\"" Jul 11 05:12:58.744800 containerd[1533]: time="2025-07-11T05:12:58.744774591Z" level=info msg="StartContainer for \"84c51892540357175d05b857c64154b5daff004d078ef4ca10d14899d93a7e02\"" Jul 11 05:12:58.745840 containerd[1533]: time="2025-07-11T05:12:58.745760641Z" level=info msg="connecting to shim 84c51892540357175d05b857c64154b5daff004d078ef4ca10d14899d93a7e02" address="unix:///run/containerd/s/c99310149f0be21deb707bebfbcd01e5c62ffff62b709777134213faaeb403d2" protocol=ttrpc version=3 Jul 11 05:12:58.786120 systemd[1]: Started cri-containerd-84c51892540357175d05b857c64154b5daff004d078ef4ca10d14899d93a7e02.scope - libcontainer container 84c51892540357175d05b857c64154b5daff004d078ef4ca10d14899d93a7e02. 
Jul 11 05:12:58.828395 containerd[1533]: time="2025-07-11T05:12:58.827160593Z" level=info msg="StartContainer for \"84c51892540357175d05b857c64154b5daff004d078ef4ca10d14899d93a7e02\" returns successfully" Jul 11 05:12:59.007455 kubelet[2671]: I0711 05:12:59.007297 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-66cd6888c9-6fnxl" podStartSLOduration=1.122853296 podStartE2EDuration="4.007281886s" podCreationTimestamp="2025-07-11 05:12:55 +0000 UTC" firstStartedPulling="2025-07-11 05:12:55.827961916 +0000 UTC m=+32.086411115" lastFinishedPulling="2025-07-11 05:12:58.712390466 +0000 UTC m=+34.970839705" observedRunningTime="2025-07-11 05:12:59.006428971 +0000 UTC m=+35.264878250" watchObservedRunningTime="2025-07-11 05:12:59.007281886 +0000 UTC m=+35.265731125" Jul 11 05:12:59.748931 kubelet[2671]: I0711 05:12:59.748887 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 05:13:00.817468 containerd[1533]: time="2025-07-11T05:13:00.817192276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b75654db8-h4sk4,Uid:4be52158-ba8a-4d65-84d5-8d5850aa0e12,Namespace:calico-apiserver,Attempt:0,}" Jul 11 05:13:00.940716 systemd-networkd[1435]: calic84130dedce: Link UP Jul 11 05:13:00.941363 systemd-networkd[1435]: calic84130dedce: Gained carrier Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.857 [INFO][4275] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.870 [INFO][4275] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b75654db8--h4sk4-eth0 calico-apiserver-6b75654db8- calico-apiserver 4be52158-ba8a-4d65-84d5-8d5850aa0e12 818 0 2025-07-11 05:12:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b75654db8 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b75654db8-h4sk4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic84130dedce [] [] }} ContainerID="c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-h4sk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--h4sk4-" Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.870 [INFO][4275] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-h4sk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--h4sk4-eth0" Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.898 [INFO][4296] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" HandleID="k8s-pod-network.c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" Workload="localhost-k8s-calico--apiserver--6b75654db8--h4sk4-eth0" Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.898 [INFO][4296] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" HandleID="k8s-pod-network.c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" Workload="localhost-k8s-calico--apiserver--6b75654db8--h4sk4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b75654db8-h4sk4", "timestamp":"2025-07-11 05:13:00.898321306 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.898 [INFO][4296] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.898 [INFO][4296] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.898 [INFO][4296] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.909 [INFO][4296] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" host="localhost" Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.914 [INFO][4296] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.918 [INFO][4296] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.920 [INFO][4296] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.922 [INFO][4296] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.922 [INFO][4296] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" host="localhost" Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.924 [INFO][4296] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.928 [INFO][4296] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" host="localhost" Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.934 [INFO][4296] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" host="localhost" Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.935 [INFO][4296] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" host="localhost" Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.935 [INFO][4296] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 05:13:00.959499 containerd[1533]: 2025-07-11 05:13:00.935 [INFO][4296] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" HandleID="k8s-pod-network.c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" Workload="localhost-k8s-calico--apiserver--6b75654db8--h4sk4-eth0" Jul 11 05:13:00.961082 containerd[1533]: 2025-07-11 05:13:00.938 [INFO][4275] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-h4sk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--h4sk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b75654db8--h4sk4-eth0", GenerateName:"calico-apiserver-6b75654db8-", Namespace:"calico-apiserver", SelfLink:"", UID:"4be52158-ba8a-4d65-84d5-8d5850aa0e12", ResourceVersion:"818", Generation:0, 
CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b75654db8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b75654db8-h4sk4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic84130dedce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:13:00.961082 containerd[1533]: 2025-07-11 05:13:00.938 [INFO][4275] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-h4sk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--h4sk4-eth0" Jul 11 05:13:00.961082 containerd[1533]: 2025-07-11 05:13:00.938 [INFO][4275] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic84130dedce ContainerID="c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-h4sk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--h4sk4-eth0" Jul 11 05:13:00.961082 containerd[1533]: 2025-07-11 05:13:00.941 [INFO][4275] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-h4sk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--h4sk4-eth0" Jul 11 05:13:00.961082 containerd[1533]: 2025-07-11 05:13:00.943 [INFO][4275] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-h4sk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--h4sk4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b75654db8--h4sk4-eth0", GenerateName:"calico-apiserver-6b75654db8-", Namespace:"calico-apiserver", SelfLink:"", UID:"4be52158-ba8a-4d65-84d5-8d5850aa0e12", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b75654db8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a", Pod:"calico-apiserver-6b75654db8-h4sk4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic84130dedce", MAC:"aa:15:d0:11:74:15", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:13:00.961082 containerd[1533]: 2025-07-11 05:13:00.952 [INFO][4275] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-h4sk4" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--h4sk4-eth0" Jul 11 05:13:01.048165 containerd[1533]: time="2025-07-11T05:13:01.048118034Z" level=info msg="connecting to shim c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a" address="unix:///run/containerd/s/f2f6ce53c67646c53441e6d8bccd509a120260c6f3d287ffc4325c97ef171ae0" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:13:01.074126 systemd[1]: Started cri-containerd-c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a.scope - libcontainer container c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a. 
Jul 11 05:13:01.088268 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:13:01.119413 containerd[1533]: time="2025-07-11T05:13:01.119334013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b75654db8-h4sk4,Uid:4be52158-ba8a-4d65-84d5-8d5850aa0e12,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a\"" Jul 11 05:13:01.122135 containerd[1533]: time="2025-07-11T05:13:01.122001354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 11 05:13:01.152906 systemd-networkd[1435]: vxlan.calico: Link UP Jul 11 05:13:01.152912 systemd-networkd[1435]: vxlan.calico: Gained carrier Jul 11 05:13:01.817843 containerd[1533]: time="2025-07-11T05:13:01.817782988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-45rdt,Uid:4a97364c-368b-41d9-83d8-fb76bf76e4bb,Namespace:calico-system,Attempt:0,}" Jul 11 05:13:01.907649 systemd-networkd[1435]: cali185c0036b39: Link UP Jul 11 05:13:01.908306 systemd-networkd[1435]: cali185c0036b39: Gained carrier Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.849 [INFO][4446] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--45rdt-eth0 goldmane-768f4c5c69- calico-system 4a97364c-368b-41d9-83d8-fb76bf76e4bb 817 0 2025-07-11 05:12:41 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-45rdt eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali185c0036b39 [] [] }} ContainerID="053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" Namespace="calico-system" Pod="goldmane-768f4c5c69-45rdt" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--45rdt-" Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.849 [INFO][4446] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" Namespace="calico-system" Pod="goldmane-768f4c5c69-45rdt" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--45rdt-eth0" Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.871 [INFO][4461] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" HandleID="k8s-pod-network.053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" Workload="localhost-k8s-goldmane--768f4c5c69--45rdt-eth0" Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.872 [INFO][4461] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" HandleID="k8s-pod-network.053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" Workload="localhost-k8s-goldmane--768f4c5c69--45rdt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-45rdt", "timestamp":"2025-07-11 05:13:01.871848426 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.872 [INFO][4461] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.872 [INFO][4461] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.872 [INFO][4461] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.881 [INFO][4461] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" host="localhost" Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.885 [INFO][4461] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.889 [INFO][4461] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.891 [INFO][4461] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.893 [INFO][4461] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.893 [INFO][4461] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" host="localhost" Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.895 [INFO][4461] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.898 [INFO][4461] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" host="localhost" Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.903 [INFO][4461] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" host="localhost" Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.903 [INFO][4461] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" host="localhost" Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.903 [INFO][4461] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 05:13:01.919799 containerd[1533]: 2025-07-11 05:13:01.903 [INFO][4461] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" HandleID="k8s-pod-network.053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" Workload="localhost-k8s-goldmane--768f4c5c69--45rdt-eth0" Jul 11 05:13:01.920775 containerd[1533]: 2025-07-11 05:13:01.905 [INFO][4446] cni-plugin/k8s.go 418: Populated endpoint ContainerID="053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" Namespace="calico-system" Pod="goldmane-768f4c5c69-45rdt" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--45rdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--45rdt-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"4a97364c-368b-41d9-83d8-fb76bf76e4bb", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-45rdt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali185c0036b39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:13:01.920775 containerd[1533]: 2025-07-11 05:13:01.905 [INFO][4446] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" Namespace="calico-system" Pod="goldmane-768f4c5c69-45rdt" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--45rdt-eth0" Jul 11 05:13:01.920775 containerd[1533]: 2025-07-11 05:13:01.905 [INFO][4446] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali185c0036b39 ContainerID="053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" Namespace="calico-system" Pod="goldmane-768f4c5c69-45rdt" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--45rdt-eth0" Jul 11 05:13:01.920775 containerd[1533]: 2025-07-11 05:13:01.908 [INFO][4446] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" Namespace="calico-system" Pod="goldmane-768f4c5c69-45rdt" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--45rdt-eth0" Jul 11 05:13:01.920775 containerd[1533]: 2025-07-11 05:13:01.909 [INFO][4446] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" Namespace="calico-system" Pod="goldmane-768f4c5c69-45rdt" 
WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--45rdt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--45rdt-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"4a97364c-368b-41d9-83d8-fb76bf76e4bb", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c", Pod:"goldmane-768f4c5c69-45rdt", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali185c0036b39", MAC:"8e:ec:da:96:d2:f7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:13:01.920775 containerd[1533]: 2025-07-11 05:13:01.917 [INFO][4446] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" Namespace="calico-system" Pod="goldmane-768f4c5c69-45rdt" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--45rdt-eth0" Jul 11 05:13:01.948441 containerd[1533]: time="2025-07-11T05:13:01.948393286Z" level=info msg="connecting to shim 
053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c" address="unix:///run/containerd/s/79fe01ba9af25eb4c9f5c9f2ad877d27b26abcfc77c06063ceaec95792bc5c10" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:13:01.966345 systemd[1]: Started cri-containerd-053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c.scope - libcontainer container 053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c. Jul 11 05:13:01.981074 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:13:02.009336 containerd[1533]: time="2025-07-11T05:13:02.009296073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-45rdt,Uid:4a97364c-368b-41d9-83d8-fb76bf76e4bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c\"" Jul 11 05:13:02.304188 systemd-networkd[1435]: vxlan.calico: Gained IPv6LL Jul 11 05:13:02.496175 systemd-networkd[1435]: calic84130dedce: Gained IPv6LL Jul 11 05:13:02.817492 containerd[1533]: time="2025-07-11T05:13:02.816961763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc56655d8-fzxgs,Uid:026e934c-f996-4488-b32e-961464e3c433,Namespace:calico-system,Attempt:0,}" Jul 11 05:13:02.818441 containerd[1533]: time="2025-07-11T05:13:02.817570532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-252sd,Uid:ddb2fcba-eef7-4cc0-812c-e6293859c9a8,Namespace:kube-system,Attempt:0,}" Jul 11 05:13:02.837090 containerd[1533]: time="2025-07-11T05:13:02.837042543Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:13:02.837524 containerd[1533]: time="2025-07-11T05:13:02.837493539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=44517149" Jul 11 05:13:02.838300 
containerd[1533]: time="2025-07-11T05:13:02.838260961Z" level=info msg="ImageCreate event name:\"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:13:02.843260 containerd[1533]: time="2025-07-11T05:13:02.843217681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"45886406\" in 1.721108518s" Jul 11 05:13:02.843260 containerd[1533]: time="2025-07-11T05:13:02.843256484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:3371ea1b18040228ef58c964e49b96f4291def748753dfbc0aef87a55f906b8f\"" Jul 11 05:13:02.850647 containerd[1533]: time="2025-07-11T05:13:02.850614077Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:13:02.851156 containerd[1533]: time="2025-07-11T05:13:02.851120998Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 11 05:13:02.869099 containerd[1533]: time="2025-07-11T05:13:02.869057444Z" level=info msg="CreateContainer within sandbox \"c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 05:13:02.877795 containerd[1533]: time="2025-07-11T05:13:02.877749465Z" level=info msg="Container 225fa9aaa12c8542230da309d94c800b8a4922e983a6c596aa3e19e0d88eb1ef: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:13:02.884542 containerd[1533]: time="2025-07-11T05:13:02.884421963Z" level=info msg="CreateContainer within sandbox 
\"c49912a0533a52cb7f8024f29fbb1991a927e21b888172013c280c087bad591a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"225fa9aaa12c8542230da309d94c800b8a4922e983a6c596aa3e19e0d88eb1ef\"" Jul 11 05:13:02.886344 containerd[1533]: time="2025-07-11T05:13:02.886312956Z" level=info msg="StartContainer for \"225fa9aaa12c8542230da309d94c800b8a4922e983a6c596aa3e19e0d88eb1ef\"" Jul 11 05:13:02.887986 containerd[1533]: time="2025-07-11T05:13:02.887943207Z" level=info msg="connecting to shim 225fa9aaa12c8542230da309d94c800b8a4922e983a6c596aa3e19e0d88eb1ef" address="unix:///run/containerd/s/f2f6ce53c67646c53441e6d8bccd509a120260c6f3d287ffc4325c97ef171ae0" protocol=ttrpc version=3 Jul 11 05:13:02.918156 systemd[1]: Started cri-containerd-225fa9aaa12c8542230da309d94c800b8a4922e983a6c596aa3e19e0d88eb1ef.scope - libcontainer container 225fa9aaa12c8542230da309d94c800b8a4922e983a6c596aa3e19e0d88eb1ef. Jul 11 05:13:02.933584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1755170958.mount: Deactivated successfully. 
Jul 11 05:13:02.946466 systemd-networkd[1435]: calic87cd705fa0: Link UP Jul 11 05:13:02.946948 systemd-networkd[1435]: calic87cd705fa0: Gained carrier Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.859 [INFO][4534] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5fc56655d8--fzxgs-eth0 calico-kube-controllers-5fc56655d8- calico-system 026e934c-f996-4488-b32e-961464e3c433 815 0 2025-07-11 05:12:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5fc56655d8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5fc56655d8-fzxgs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic87cd705fa0 [] [] }} ContainerID="7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" Namespace="calico-system" Pod="calico-kube-controllers-5fc56655d8-fzxgs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fc56655d8--fzxgs-" Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.859 [INFO][4534] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" Namespace="calico-system" Pod="calico-kube-controllers-5fc56655d8-fzxgs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fc56655d8--fzxgs-eth0" Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.896 [INFO][4565] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" HandleID="k8s-pod-network.7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" Workload="localhost-k8s-calico--kube--controllers--5fc56655d8--fzxgs-eth0" Jul 11 05:13:02.961575 containerd[1533]: 
2025-07-11 05:13:02.897 [INFO][4565] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" HandleID="k8s-pod-network.7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" Workload="localhost-k8s-calico--kube--controllers--5fc56655d8--fzxgs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136e70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5fc56655d8-fzxgs", "timestamp":"2025-07-11 05:13:02.896792681 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.897 [INFO][4565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.897 [INFO][4565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.897 [INFO][4565] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.908 [INFO][4565] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" host="localhost" Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.913 [INFO][4565] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.918 [INFO][4565] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.923 [INFO][4565] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.926 [INFO][4565] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.926 [INFO][4565] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" host="localhost" Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.928 [INFO][4565] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126 Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.931 [INFO][4565] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" host="localhost" Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.938 [INFO][4565] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" host="localhost" Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.938 [INFO][4565] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" host="localhost" Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.938 [INFO][4565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 05:13:02.961575 containerd[1533]: 2025-07-11 05:13:02.938 [INFO][4565] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" HandleID="k8s-pod-network.7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" Workload="localhost-k8s-calico--kube--controllers--5fc56655d8--fzxgs-eth0" Jul 11 05:13:02.962578 containerd[1533]: 2025-07-11 05:13:02.941 [INFO][4534] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" Namespace="calico-system" Pod="calico-kube-controllers-5fc56655d8-fzxgs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fc56655d8--fzxgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5fc56655d8--fzxgs-eth0", GenerateName:"calico-kube-controllers-5fc56655d8-", Namespace:"calico-system", SelfLink:"", UID:"026e934c-f996-4488-b32e-961464e3c433", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fc56655d8", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5fc56655d8-fzxgs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic87cd705fa0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:13:02.962578 containerd[1533]: 2025-07-11 05:13:02.942 [INFO][4534] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" Namespace="calico-system" Pod="calico-kube-controllers-5fc56655d8-fzxgs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fc56655d8--fzxgs-eth0" Jul 11 05:13:02.962578 containerd[1533]: 2025-07-11 05:13:02.942 [INFO][4534] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic87cd705fa0 ContainerID="7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" Namespace="calico-system" Pod="calico-kube-controllers-5fc56655d8-fzxgs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fc56655d8--fzxgs-eth0" Jul 11 05:13:02.962578 containerd[1533]: 2025-07-11 05:13:02.947 [INFO][4534] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" Namespace="calico-system" Pod="calico-kube-controllers-5fc56655d8-fzxgs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fc56655d8--fzxgs-eth0" Jul 11 05:13:02.962578 containerd[1533]: 
2025-07-11 05:13:02.948 [INFO][4534] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" Namespace="calico-system" Pod="calico-kube-controllers-5fc56655d8-fzxgs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fc56655d8--fzxgs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5fc56655d8--fzxgs-eth0", GenerateName:"calico-kube-controllers-5fc56655d8-", Namespace:"calico-system", SelfLink:"", UID:"026e934c-f996-4488-b32e-961464e3c433", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5fc56655d8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126", Pod:"calico-kube-controllers-5fc56655d8-fzxgs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic87cd705fa0", MAC:"b6:e0:07:1c:d3:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:13:02.962578 containerd[1533]: 
2025-07-11 05:13:02.959 [INFO][4534] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" Namespace="calico-system" Pod="calico-kube-controllers-5fc56655d8-fzxgs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5fc56655d8--fzxgs-eth0" Jul 11 05:13:03.036557 containerd[1533]: time="2025-07-11T05:13:03.036515437Z" level=info msg="StartContainer for \"225fa9aaa12c8542230da309d94c800b8a4922e983a6c596aa3e19e0d88eb1ef\" returns successfully" Jul 11 05:13:03.071228 kubelet[2671]: I0711 05:13:03.069232 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b75654db8-h4sk4" podStartSLOduration=21.339413226 podStartE2EDuration="23.069215967s" podCreationTimestamp="2025-07-11 05:12:40 +0000 UTC" firstStartedPulling="2025-07-11 05:13:01.121084958 +0000 UTC m=+37.379534197" lastFinishedPulling="2025-07-11 05:13:02.850887699 +0000 UTC m=+39.109336938" observedRunningTime="2025-07-11 05:13:03.068848258 +0000 UTC m=+39.327297537" watchObservedRunningTime="2025-07-11 05:13:03.069215967 +0000 UTC m=+39.327665206" Jul 11 05:13:03.078487 containerd[1533]: time="2025-07-11T05:13:03.078429931Z" level=info msg="connecting to shim 7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126" address="unix:///run/containerd/s/0dda6adc01a2d4bda2e9367d977eeadea1cb6b40e76860b8aa1ee0f344030109" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:13:03.079657 systemd[1]: Started sshd@7-10.0.0.147:22-10.0.0.1:53396.service - OpenSSH per-connection server daemon (10.0.0.1:53396). 
Jul 11 05:13:03.098310 systemd-networkd[1435]: calia4e883b4f2f: Link UP Jul 11 05:13:03.099341 systemd-networkd[1435]: calia4e883b4f2f: Gained carrier Jul 11 05:13:03.114224 systemd[1]: Started cri-containerd-7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126.scope - libcontainer container 7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126. Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:02.869 [INFO][4539] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--252sd-eth0 coredns-668d6bf9bc- kube-system ddb2fcba-eef7-4cc0-812c-e6293859c9a8 811 0 2025-07-11 05:12:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-252sd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia4e883b4f2f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" Namespace="kube-system" Pod="coredns-668d6bf9bc-252sd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--252sd-" Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:02.869 [INFO][4539] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" Namespace="kube-system" Pod="coredns-668d6bf9bc-252sd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--252sd-eth0" Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:02.900 [INFO][4571] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" HandleID="k8s-pod-network.54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" Workload="localhost-k8s-coredns--668d6bf9bc--252sd-eth0" Jul 11 05:13:03.121448 
containerd[1533]: 2025-07-11 05:13:02.900 [INFO][4571] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" HandleID="k8s-pod-network.54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" Workload="localhost-k8s-coredns--668d6bf9bc--252sd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b090), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-252sd", "timestamp":"2025-07-11 05:13:02.900508901 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:02.900 [INFO][4571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:02.938 [INFO][4571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:02.938 [INFO][4571] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:03.020 [INFO][4571] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" host="localhost" Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:03.025 [INFO][4571] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:03.031 [INFO][4571] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:03.033 [INFO][4571] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:03.037 [INFO][4571] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:03.037 [INFO][4571] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" host="localhost" Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:03.040 [INFO][4571] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46 Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:03.069 [INFO][4571] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" host="localhost" Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:03.088 [INFO][4571] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" host="localhost" Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:03.088 [INFO][4571] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" host="localhost" Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:03.090 [INFO][4571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 05:13:03.121448 containerd[1533]: 2025-07-11 05:13:03.090 [INFO][4571] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" HandleID="k8s-pod-network.54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" Workload="localhost-k8s-coredns--668d6bf9bc--252sd-eth0" Jul 11 05:13:03.122060 containerd[1533]: 2025-07-11 05:13:03.094 [INFO][4539] cni-plugin/k8s.go 418: Populated endpoint ContainerID="54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" Namespace="kube-system" Pod="coredns-668d6bf9bc-252sd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--252sd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--252sd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ddb2fcba-eef7-4cc0-812c-e6293859c9a8", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-252sd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia4e883b4f2f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:13:03.122060 containerd[1533]: 2025-07-11 05:13:03.094 [INFO][4539] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" Namespace="kube-system" Pod="coredns-668d6bf9bc-252sd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--252sd-eth0" Jul 11 05:13:03.122060 containerd[1533]: 2025-07-11 05:13:03.094 [INFO][4539] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4e883b4f2f ContainerID="54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" Namespace="kube-system" Pod="coredns-668d6bf9bc-252sd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--252sd-eth0" Jul 11 05:13:03.122060 containerd[1533]: 2025-07-11 05:13:03.101 [INFO][4539] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" Namespace="kube-system" Pod="coredns-668d6bf9bc-252sd" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--252sd-eth0" Jul 11 05:13:03.122060 containerd[1533]: 2025-07-11 05:13:03.102 [INFO][4539] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" Namespace="kube-system" Pod="coredns-668d6bf9bc-252sd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--252sd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--252sd-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"ddb2fcba-eef7-4cc0-812c-e6293859c9a8", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46", Pod:"coredns-668d6bf9bc-252sd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia4e883b4f2f", MAC:"f2:75:74:77:a7:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:13:03.122275 containerd[1533]: 2025-07-11 05:13:03.115 [INFO][4539] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" Namespace="kube-system" Pod="coredns-668d6bf9bc-252sd" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--252sd-eth0" Jul 11 05:13:03.135633 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:13:03.136486 systemd-networkd[1435]: cali185c0036b39: Gained IPv6LL Jul 11 05:13:03.161313 sshd[4641]: Accepted publickey for core from 10.0.0.1 port 53396 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:13:03.164100 sshd-session[4641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:13:03.164950 containerd[1533]: time="2025-07-11T05:13:03.164906007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5fc56655d8-fzxgs,Uid:026e934c-f996-4488-b32e-961464e3c433,Namespace:calico-system,Attempt:0,} returns sandbox id \"7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126\"" Jul 11 05:13:03.167670 containerd[1533]: time="2025-07-11T05:13:03.167627941Z" level=info msg="connecting to shim 54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46" address="unix:///run/containerd/s/f4a8ea26edafcbcb09377e4247b59eba651d8d341a170bd1ec53426cf80d9ab2" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:13:03.170223 systemd-logind[1508]: New session 8 of user core. Jul 11 05:13:03.177142 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jul 11 05:13:03.194107 systemd[1]: Started cri-containerd-54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46.scope - libcontainer container 54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46. Jul 11 05:13:03.205839 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:13:03.250742 containerd[1533]: time="2025-07-11T05:13:03.250697829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-252sd,Uid:ddb2fcba-eef7-4cc0-812c-e6293859c9a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46\"" Jul 11 05:13:03.256429 containerd[1533]: time="2025-07-11T05:13:03.255831192Z" level=info msg="CreateContainer within sandbox \"54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 05:13:03.270989 containerd[1533]: time="2025-07-11T05:13:03.269813051Z" level=info msg="Container ea5e68ae02f7c1309ec518ac2171789844318c8ddf6bbec3b38871e1f2a9b97e: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:13:03.278671 containerd[1533]: time="2025-07-11T05:13:03.278634464Z" level=info msg="CreateContainer within sandbox \"54c5ea03202992f5b68ca957ef2acbb56c6240b592148f15fac4b16575991e46\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ea5e68ae02f7c1309ec518ac2171789844318c8ddf6bbec3b38871e1f2a9b97e\"" Jul 11 05:13:03.280358 containerd[1533]: time="2025-07-11T05:13:03.280336238Z" level=info msg="StartContainer for \"ea5e68ae02f7c1309ec518ac2171789844318c8ddf6bbec3b38871e1f2a9b97e\"" Jul 11 05:13:03.282362 containerd[1533]: time="2025-07-11T05:13:03.282223706Z" level=info msg="connecting to shim ea5e68ae02f7c1309ec518ac2171789844318c8ddf6bbec3b38871e1f2a9b97e" address="unix:///run/containerd/s/f4a8ea26edafcbcb09377e4247b59eba651d8d341a170bd1ec53426cf80d9ab2" protocol=ttrpc version=3 Jul 11 
05:13:03.303133 systemd[1]: Started cri-containerd-ea5e68ae02f7c1309ec518ac2171789844318c8ddf6bbec3b38871e1f2a9b97e.scope - libcontainer container ea5e68ae02f7c1309ec518ac2171789844318c8ddf6bbec3b38871e1f2a9b97e. Jul 11 05:13:03.335497 containerd[1533]: time="2025-07-11T05:13:03.335370483Z" level=info msg="StartContainer for \"ea5e68ae02f7c1309ec518ac2171789844318c8ddf6bbec3b38871e1f2a9b97e\" returns successfully" Jul 11 05:13:03.529437 sshd[4718]: Connection closed by 10.0.0.1 port 53396 Jul 11 05:13:03.530201 sshd-session[4641]: pam_unix(sshd:session): session closed for user core Jul 11 05:13:03.534579 systemd-logind[1508]: Session 8 logged out. Waiting for processes to exit. Jul 11 05:13:03.534935 systemd[1]: sshd@7-10.0.0.147:22-10.0.0.1:53396.service: Deactivated successfully. Jul 11 05:13:03.538379 systemd[1]: session-8.scope: Deactivated successfully. Jul 11 05:13:03.539911 systemd-logind[1508]: Removed session 8. Jul 11 05:13:04.049998 kubelet[2671]: I0711 05:13:04.049753 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 05:13:04.079530 kubelet[2671]: I0711 05:13:04.079270 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-252sd" podStartSLOduration=34.07925631 podStartE2EDuration="34.07925631s" podCreationTimestamp="2025-07-11 05:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:13:04.078883682 +0000 UTC m=+40.337332921" watchObservedRunningTime="2025-07-11 05:13:04.07925631 +0000 UTC m=+40.337705549" Jul 11 05:13:04.096366 systemd-networkd[1435]: calic87cd705fa0: Gained IPv6LL Jul 11 05:13:04.537952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2359120642.mount: Deactivated successfully. 
Jul 11 05:13:04.817878 containerd[1533]: time="2025-07-11T05:13:04.817746962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b75654db8-xnd5b,Uid:46dce721-fca6-4fb3-90cb-6ea1278e55d3,Namespace:calico-apiserver,Attempt:0,}" Jul 11 05:13:04.818477 containerd[1533]: time="2025-07-11T05:13:04.818446576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mmj54,Uid:21ff705d-19fe-42bb-bec5-e77722b62149,Namespace:kube-system,Attempt:0,}" Jul 11 05:13:04.866904 systemd-networkd[1435]: calia4e883b4f2f: Gained IPv6LL Jul 11 05:13:04.953993 containerd[1533]: time="2025-07-11T05:13:04.953780630Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:13:04.954457 containerd[1533]: time="2025-07-11T05:13:04.954421639Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=61838790" Jul 11 05:13:04.956187 containerd[1533]: time="2025-07-11T05:13:04.956139891Z" level=info msg="ImageCreate event name:\"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:13:04.962246 containerd[1533]: time="2025-07-11T05:13:04.962115669Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:13:04.964048 containerd[1533]: time="2025-07-11T05:13:04.963653427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"61838636\" in 2.112461583s" Jul 11 
05:13:04.964048 containerd[1533]: time="2025-07-11T05:13:04.964047297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:1389d38feb576cfff09a57a2c028a53e51a72c658f295166960f770eaf07985f\"" Jul 11 05:13:04.966901 containerd[1533]: time="2025-07-11T05:13:04.966856272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 11 05:13:04.968099 containerd[1533]: time="2025-07-11T05:13:04.967952836Z" level=info msg="CreateContainer within sandbox \"053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 11 05:13:04.981004 containerd[1533]: time="2025-07-11T05:13:04.979840188Z" level=info msg="Container 6e20e2e9cb603e7641ac94b992e08e1e6e9b2d501bec1b48f5698d415b50e74f: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:13:04.990206 containerd[1533]: time="2025-07-11T05:13:04.990156539Z" level=info msg="CreateContainer within sandbox \"053e4b7d5d6b6cea199f413f0b8c188962d2d3a879f74f035400defe6c91002c\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"6e20e2e9cb603e7641ac94b992e08e1e6e9b2d501bec1b48f5698d415b50e74f\"" Jul 11 05:13:04.991012 containerd[1533]: time="2025-07-11T05:13:04.990702380Z" level=info msg="StartContainer for \"6e20e2e9cb603e7641ac94b992e08e1e6e9b2d501bec1b48f5698d415b50e74f\"" Jul 11 05:13:04.992638 containerd[1533]: time="2025-07-11T05:13:04.992562523Z" level=info msg="connecting to shim 6e20e2e9cb603e7641ac94b992e08e1e6e9b2d501bec1b48f5698d415b50e74f" address="unix:///run/containerd/s/79fe01ba9af25eb4c9f5c9f2ad877d27b26abcfc77c06063ceaec95792bc5c10" protocol=ttrpc version=3 Jul 11 05:13:05.010565 systemd-networkd[1435]: calia401881a942: Link UP Jul 11 05:13:05.012185 systemd-networkd[1435]: calia401881a942: Gained carrier Jul 11 05:13:05.027416 systemd[1]: Started cri-containerd-6e20e2e9cb603e7641ac94b992e08e1e6e9b2d501bec1b48f5698d415b50e74f.scope - libcontainer container 
6e20e2e9cb603e7641ac94b992e08e1e6e9b2d501bec1b48f5698d415b50e74f. Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.915 [INFO][4796] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--mmj54-eth0 coredns-668d6bf9bc- kube-system 21ff705d-19fe-42bb-bec5-e77722b62149 814 0 2025-07-11 05:12:30 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-mmj54 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia401881a942 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-mmj54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mmj54-" Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.915 [INFO][4796] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-mmj54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mmj54-eth0" Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.948 [INFO][4821] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" HandleID="k8s-pod-network.fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" Workload="localhost-k8s-coredns--668d6bf9bc--mmj54-eth0" Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.948 [INFO][4821] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" HandleID="k8s-pod-network.fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" 
Workload="localhost-k8s-coredns--668d6bf9bc--mmj54-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004b4600), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-mmj54", "timestamp":"2025-07-11 05:13:04.948325772 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.949 [INFO][4821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.949 [INFO][4821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.949 [INFO][4821] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.961 [INFO][4821] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" host="localhost" Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.966 [INFO][4821] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.973 [INFO][4821] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.975 [INFO][4821] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.978 [INFO][4821] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.978 [INFO][4821] ipam/ipam.go 1220: Attempting to assign 1 addresses from block 
block=192.168.88.128/26 handle="k8s-pod-network.fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" host="localhost" Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.982 [INFO][4821] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.987 [INFO][4821] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" host="localhost" Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.993 [INFO][4821] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" host="localhost" Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.993 [INFO][4821] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" host="localhost" Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.993 [INFO][4821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 11 05:13:05.031326 containerd[1533]: 2025-07-11 05:13:04.993 [INFO][4821] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" HandleID="k8s-pod-network.fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" Workload="localhost-k8s-coredns--668d6bf9bc--mmj54-eth0" Jul 11 05:13:05.031809 containerd[1533]: 2025-07-11 05:13:05.001 [INFO][4796] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-mmj54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mmj54-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mmj54-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"21ff705d-19fe-42bb-bec5-e77722b62149", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-mmj54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia401881a942", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:13:05.031809 containerd[1533]: 2025-07-11 05:13:05.003 [INFO][4796] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-mmj54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mmj54-eth0" Jul 11 05:13:05.031809 containerd[1533]: 2025-07-11 05:13:05.003 [INFO][4796] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia401881a942 ContainerID="fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-mmj54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mmj54-eth0" Jul 11 05:13:05.031809 containerd[1533]: 2025-07-11 05:13:05.010 [INFO][4796] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-mmj54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mmj54-eth0" Jul 11 05:13:05.031809 containerd[1533]: 2025-07-11 05:13:05.013 [INFO][4796] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-mmj54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mmj54-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--mmj54-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"21ff705d-19fe-42bb-bec5-e77722b62149", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae", Pod:"coredns-668d6bf9bc-mmj54", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia401881a942", MAC:"0a:5e:5b:27:9b:6f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:13:05.032892 containerd[1533]: 2025-07-11 05:13:05.025 [INFO][4796] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" Namespace="kube-system" Pod="coredns-668d6bf9bc-mmj54" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--mmj54-eth0" Jul 11 05:13:05.062460 containerd[1533]: time="2025-07-11T05:13:05.062415607Z" level=info msg="connecting to shim fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae" address="unix:///run/containerd/s/025604ff2bd3b1ed5534b6f23703eaa441b9053c89a79925d0b3ef5c08ee48a7" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:13:05.095382 systemd[1]: Started cri-containerd-fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae.scope - libcontainer container fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae. Jul 11 05:13:05.099991 containerd[1533]: time="2025-07-11T05:13:05.099934655Z" level=info msg="StartContainer for \"6e20e2e9cb603e7641ac94b992e08e1e6e9b2d501bec1b48f5698d415b50e74f\" returns successfully" Jul 11 05:13:05.124264 systemd-networkd[1435]: cali9ba05e0e2c7: Link UP Jul 11 05:13:05.127298 systemd-networkd[1435]: cali9ba05e0e2c7: Gained carrier Jul 11 05:13:05.129319 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:04.925 [INFO][4812] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6b75654db8--xnd5b-eth0 calico-apiserver-6b75654db8- calico-apiserver 46dce721-fca6-4fb3-90cb-6ea1278e55d3 808 0 2025-07-11 05:12:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b75654db8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6b75654db8-xnd5b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] 
cali9ba05e0e2c7 [] [] }} ContainerID="1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-xnd5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--xnd5b-" Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:04.925 [INFO][4812] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-xnd5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--xnd5b-eth0" Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:04.964 [INFO][4828] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" HandleID="k8s-pod-network.1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" Workload="localhost-k8s-calico--apiserver--6b75654db8--xnd5b-eth0" Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:04.964 [INFO][4828] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" HandleID="k8s-pod-network.1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" Workload="localhost-k8s-calico--apiserver--6b75654db8--xnd5b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001af3a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6b75654db8-xnd5b", "timestamp":"2025-07-11 05:13:04.964549056 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:04.964 [INFO][4828] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:04.994 [INFO][4828] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:04.994 [INFO][4828] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:05.063 [INFO][4828] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" host="localhost" Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:05.071 [INFO][4828] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:05.082 [INFO][4828] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:05.084 [INFO][4828] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:05.087 [INFO][4828] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:05.087 [INFO][4828] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" host="localhost" Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:05.092 [INFO][4828] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917 Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:05.100 [INFO][4828] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" host="localhost" Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:05.109 [INFO][4828] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" host="localhost" Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:05.110 [INFO][4828] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" host="localhost" Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:05.110 [INFO][4828] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 05:13:05.144383 containerd[1533]: 2025-07-11 05:13:05.110 [INFO][4828] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" HandleID="k8s-pod-network.1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" Workload="localhost-k8s-calico--apiserver--6b75654db8--xnd5b-eth0" Jul 11 05:13:05.144877 containerd[1533]: 2025-07-11 05:13:05.116 [INFO][4812] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-xnd5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--xnd5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b75654db8--xnd5b-eth0", GenerateName:"calico-apiserver-6b75654db8-", Namespace:"calico-apiserver", SelfLink:"", UID:"46dce721-fca6-4fb3-90cb-6ea1278e55d3", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b75654db8", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6b75654db8-xnd5b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ba05e0e2c7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:13:05.144877 containerd[1533]: 2025-07-11 05:13:05.116 [INFO][4812] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-xnd5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--xnd5b-eth0" Jul 11 05:13:05.144877 containerd[1533]: 2025-07-11 05:13:05.116 [INFO][4812] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ba05e0e2c7 ContainerID="1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-xnd5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--xnd5b-eth0" Jul 11 05:13:05.144877 containerd[1533]: 2025-07-11 05:13:05.126 [INFO][4812] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-xnd5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--xnd5b-eth0" Jul 11 05:13:05.144877 containerd[1533]: 2025-07-11 
05:13:05.126 [INFO][4812] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-xnd5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--xnd5b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6b75654db8--xnd5b-eth0", GenerateName:"calico-apiserver-6b75654db8-", Namespace:"calico-apiserver", SelfLink:"", UID:"46dce721-fca6-4fb3-90cb-6ea1278e55d3", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b75654db8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917", Pod:"calico-apiserver-6b75654db8-xnd5b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9ba05e0e2c7", MAC:"6a:3d:0c:cb:35:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:13:05.144877 containerd[1533]: 2025-07-11 05:13:05.138 [INFO][4812] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" Namespace="calico-apiserver" Pod="calico-apiserver-6b75654db8-xnd5b" WorkloadEndpoint="localhost-k8s-calico--apiserver--6b75654db8--xnd5b-eth0" Jul 11 05:13:05.164483 containerd[1533]: time="2025-07-11T05:13:05.163931566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-mmj54,Uid:21ff705d-19fe-42bb-bec5-e77722b62149,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae\"" Jul 11 05:13:05.171414 containerd[1533]: time="2025-07-11T05:13:05.171377323Z" level=info msg="CreateContainer within sandbox \"fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 11 05:13:05.178552 containerd[1533]: time="2025-07-11T05:13:05.178465213Z" level=info msg="Container 02b7c03c92c56f8eb1e2fbeb15e7559d395cf1f962c0d3ac3e90eb003248c45e: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:13:05.184987 containerd[1533]: time="2025-07-11T05:13:05.184888134Z" level=info msg="connecting to shim 1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917" address="unix:///run/containerd/s/0d4271194ecfa4e26fce22a1dcdce29996f0e3d1c1c8c20f2c9cf84094031940" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:13:05.186420 containerd[1533]: time="2025-07-11T05:13:05.186370605Z" level=info msg="CreateContainer within sandbox \"fc8b0bb9c0a397db7d43488cbb5daca842b5a3ea6f8cefcb0f0222a6fcc107ae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"02b7c03c92c56f8eb1e2fbeb15e7559d395cf1f962c0d3ac3e90eb003248c45e\"" Jul 11 05:13:05.188272 containerd[1533]: time="2025-07-11T05:13:05.188231744Z" level=info msg="StartContainer for \"02b7c03c92c56f8eb1e2fbeb15e7559d395cf1f962c0d3ac3e90eb003248c45e\"" Jul 11 05:13:05.190991 containerd[1533]: time="2025-07-11T05:13:05.190525556Z" 
level=info msg="connecting to shim 02b7c03c92c56f8eb1e2fbeb15e7559d395cf1f962c0d3ac3e90eb003248c45e" address="unix:///run/containerd/s/025604ff2bd3b1ed5534b6f23703eaa441b9053c89a79925d0b3ef5c08ee48a7" protocol=ttrpc version=3 Jul 11 05:13:05.221365 systemd[1]: Started cri-containerd-1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917.scope - libcontainer container 1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917. Jul 11 05:13:05.239119 systemd[1]: Started cri-containerd-02b7c03c92c56f8eb1e2fbeb15e7559d395cf1f962c0d3ac3e90eb003248c45e.scope - libcontainer container 02b7c03c92c56f8eb1e2fbeb15e7559d395cf1f962c0d3ac3e90eb003248c45e. Jul 11 05:13:05.242370 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:13:05.267372 containerd[1533]: time="2025-07-11T05:13:05.267287062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b75654db8-xnd5b,Uid:46dce721-fca6-4fb3-90cb-6ea1278e55d3,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917\"" Jul 11 05:13:05.272999 containerd[1533]: time="2025-07-11T05:13:05.272888681Z" level=info msg="CreateContainer within sandbox \"1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 11 05:13:05.273806 containerd[1533]: time="2025-07-11T05:13:05.273769707Z" level=info msg="StartContainer for \"02b7c03c92c56f8eb1e2fbeb15e7559d395cf1f962c0d3ac3e90eb003248c45e\" returns successfully" Jul 11 05:13:05.281993 containerd[1533]: time="2025-07-11T05:13:05.281460883Z" level=info msg="Container 7acf4d6d4c8f05bc78ba2179144d3aba02e45cb83b52d14b4fb4a76d03300ce2: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:13:05.289137 containerd[1533]: time="2025-07-11T05:13:05.289087974Z" level=info msg="CreateContainer within sandbox 
\"1d558e52bb66daa37a833cfcc36064db9c0dc45deeac42622fdaa106f65b2917\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7acf4d6d4c8f05bc78ba2179144d3aba02e45cb83b52d14b4fb4a76d03300ce2\"" Jul 11 05:13:05.290922 containerd[1533]: time="2025-07-11T05:13:05.290889909Z" level=info msg="StartContainer for \"7acf4d6d4c8f05bc78ba2179144d3aba02e45cb83b52d14b4fb4a76d03300ce2\"" Jul 11 05:13:05.293680 containerd[1533]: time="2025-07-11T05:13:05.293644675Z" level=info msg="connecting to shim 7acf4d6d4c8f05bc78ba2179144d3aba02e45cb83b52d14b4fb4a76d03300ce2" address="unix:///run/containerd/s/0d4271194ecfa4e26fce22a1dcdce29996f0e3d1c1c8c20f2c9cf84094031940" protocol=ttrpc version=3 Jul 11 05:13:05.314136 systemd[1]: Started cri-containerd-7acf4d6d4c8f05bc78ba2179144d3aba02e45cb83b52d14b4fb4a76d03300ce2.scope - libcontainer container 7acf4d6d4c8f05bc78ba2179144d3aba02e45cb83b52d14b4fb4a76d03300ce2. Jul 11 05:13:05.351444 containerd[1533]: time="2025-07-11T05:13:05.351266908Z" level=info msg="StartContainer for \"7acf4d6d4c8f05bc78ba2179144d3aba02e45cb83b52d14b4fb4a76d03300ce2\" returns successfully" Jul 11 05:13:05.818107 containerd[1533]: time="2025-07-11T05:13:05.818064369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sv579,Uid:23f37a14-7ca7-435f-adc0-f7dd1f70c437,Namespace:calico-system,Attempt:0,}" Jul 11 05:13:05.942049 systemd-networkd[1435]: cali28a8b1e9b98: Link UP Jul 11 05:13:05.942214 systemd-networkd[1435]: cali28a8b1e9b98: Gained carrier Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.867 [INFO][5059] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--sv579-eth0 csi-node-driver- calico-system 23f37a14-7ca7-435f-adc0-f7dd1f70c437 714 0 2025-07-11 05:12:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-sv579 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali28a8b1e9b98 [] [] }} ContainerID="418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" Namespace="calico-system" Pod="csi-node-driver-sv579" WorkloadEndpoint="localhost-k8s-csi--node--driver--sv579-" Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.867 [INFO][5059] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" Namespace="calico-system" Pod="csi-node-driver-sv579" WorkloadEndpoint="localhost-k8s-csi--node--driver--sv579-eth0" Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.893 [INFO][5074] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" HandleID="k8s-pod-network.418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" Workload="localhost-k8s-csi--node--driver--sv579-eth0" Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.893 [INFO][5074] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" HandleID="k8s-pod-network.418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" Workload="localhost-k8s-csi--node--driver--sv579-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034afe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-sv579", "timestamp":"2025-07-11 05:13:05.893673068 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 11 
05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.893 [INFO][5074] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.893 [INFO][5074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.893 [INFO][5074] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.904 [INFO][5074] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" host="localhost" Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.910 [INFO][5074] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.916 [INFO][5074] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.920 [INFO][5074] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.924 [INFO][5074] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.924 [INFO][5074] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" host="localhost" Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.925 [INFO][5074] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788 Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.929 [INFO][5074] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" host="localhost" Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.935 [INFO][5074] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" host="localhost" Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.936 [INFO][5074] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" host="localhost" Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.936 [INFO][5074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 11 05:13:05.963551 containerd[1533]: 2025-07-11 05:13:05.936 [INFO][5074] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" HandleID="k8s-pod-network.418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" Workload="localhost-k8s-csi--node--driver--sv579-eth0" Jul 11 05:13:05.964234 containerd[1533]: 2025-07-11 05:13:05.939 [INFO][5059] cni-plugin/k8s.go 418: Populated endpoint ContainerID="418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" Namespace="calico-system" Pod="csi-node-driver-sv579" WorkloadEndpoint="localhost-k8s-csi--node--driver--sv579-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sv579-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23f37a14-7ca7-435f-adc0-f7dd1f70c437", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-sv579", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28a8b1e9b98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 05:13:05.964234 containerd[1533]: 2025-07-11 05:13:05.939 [INFO][5059] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" Namespace="calico-system" Pod="csi-node-driver-sv579" WorkloadEndpoint="localhost-k8s-csi--node--driver--sv579-eth0" Jul 11 05:13:05.964234 containerd[1533]: 2025-07-11 05:13:05.939 [INFO][5059] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali28a8b1e9b98 ContainerID="418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" Namespace="calico-system" Pod="csi-node-driver-sv579" WorkloadEndpoint="localhost-k8s-csi--node--driver--sv579-eth0" Jul 11 05:13:05.964234 containerd[1533]: 2025-07-11 05:13:05.944 [INFO][5059] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" Namespace="calico-system" Pod="csi-node-driver-sv579" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--sv579-eth0" Jul 11 05:13:05.964234 containerd[1533]: 2025-07-11 05:13:05.946 [INFO][5059] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" Namespace="calico-system" Pod="csi-node-driver-sv579" WorkloadEndpoint="localhost-k8s-csi--node--driver--sv579-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--sv579-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"23f37a14-7ca7-435f-adc0-f7dd1f70c437", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.July, 11, 5, 12, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788", Pod:"csi-node-driver-sv579", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali28a8b1e9b98", MAC:"ee:ae:d7:16:6c:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 11 
05:13:05.964234 containerd[1533]: 2025-07-11 05:13:05.957 [INFO][5059] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" Namespace="calico-system" Pod="csi-node-driver-sv579" WorkloadEndpoint="localhost-k8s-csi--node--driver--sv579-eth0" Jul 11 05:13:06.003433 containerd[1533]: time="2025-07-11T05:13:06.003352154Z" level=info msg="connecting to shim 418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788" address="unix:///run/containerd/s/31dc3f2e8508c06e0aaddee5a15997391c5de7ea2337aa051479a29dff681fcb" namespace=k8s.io protocol=ttrpc version=3 Jul 11 05:13:06.039137 systemd[1]: Started cri-containerd-418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788.scope - libcontainer container 418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788. Jul 11 05:13:06.053596 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 11 05:13:06.071091 containerd[1533]: time="2025-07-11T05:13:06.070995903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-sv579,Uid:23f37a14-7ca7-435f-adc0-f7dd1f70c437,Namespace:calico-system,Attempt:0,} returns sandbox id \"418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788\"" Jul 11 05:13:06.114195 kubelet[2671]: I0711 05:13:06.113508 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-45rdt" podStartSLOduration=22.157869727 podStartE2EDuration="25.113489651s" podCreationTimestamp="2025-07-11 05:12:41 +0000 UTC" firstStartedPulling="2025-07-11 05:13:02.010892922 +0000 UTC m=+38.269342161" lastFinishedPulling="2025-07-11 05:13:04.966512886 +0000 UTC m=+41.224962085" observedRunningTime="2025-07-11 05:13:06.082762923 +0000 UTC m=+42.341212162" watchObservedRunningTime="2025-07-11 05:13:06.113489651 +0000 UTC m=+42.371938890" Jul 11 05:13:06.114195 kubelet[2671]: I0711 
05:13:06.113762 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6b75654db8-xnd5b" podStartSLOduration=26.113756231 podStartE2EDuration="26.113756231s" podCreationTimestamp="2025-07-11 05:12:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:13:06.113643863 +0000 UTC m=+42.372093102" watchObservedRunningTime="2025-07-11 05:13:06.113756231 +0000 UTC m=+42.372205470" Jul 11 05:13:06.129958 kubelet[2671]: I0711 05:13:06.129869 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-mmj54" podStartSLOduration=36.129831567 podStartE2EDuration="36.129831567s" podCreationTimestamp="2025-07-11 05:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-11 05:13:06.12700844 +0000 UTC m=+42.385457719" watchObservedRunningTime="2025-07-11 05:13:06.129831567 +0000 UTC m=+42.388280806" Jul 11 05:13:06.208118 systemd-networkd[1435]: cali9ba05e0e2c7: Gained IPv6LL Jul 11 05:13:06.848095 systemd-networkd[1435]: calia401881a942: Gained IPv6LL Jul 11 05:13:06.976112 systemd-networkd[1435]: cali28a8b1e9b98: Gained IPv6LL Jul 11 05:13:07.080003 kubelet[2671]: I0711 05:13:07.079769 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 05:13:07.482423 containerd[1533]: time="2025-07-11T05:13:07.482378113Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:13:07.483035 containerd[1533]: time="2025-07-11T05:13:07.483008358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=48128336" Jul 11 05:13:07.483985 containerd[1533]: time="2025-07-11T05:13:07.483915863Z" level=info 
msg="ImageCreate event name:\"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:13:07.486438 containerd[1533]: time="2025-07-11T05:13:07.486400921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:13:07.487200 containerd[1533]: time="2025-07-11T05:13:07.487155895Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"49497545\" in 2.520250179s" Jul 11 05:13:07.487330 containerd[1533]: time="2025-07-11T05:13:07.487184857Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:ba9e7793995ca67a9b78aa06adda4e89cbd435b1e88ab1032ca665140517fa7a\"" Jul 11 05:13:07.493717 containerd[1533]: time="2025-07-11T05:13:07.493678122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 11 05:13:07.500570 containerd[1533]: time="2025-07-11T05:13:07.500535213Z" level=info msg="CreateContainer within sandbox \"7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 11 05:13:07.514340 containerd[1533]: time="2025-07-11T05:13:07.514084742Z" level=info msg="Container d41d618d50fe38193e6b8fd9fc524a4bc71b101e800fde8f68760dc99bd028f4: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:13:07.520275 containerd[1533]: time="2025-07-11T05:13:07.520233822Z" level=info msg="CreateContainer within sandbox 
\"7aa447661da11fbbca5df5f5aa2aa1b5f933238a1a5d5b717a20b385c0e23126\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d41d618d50fe38193e6b8fd9fc524a4bc71b101e800fde8f68760dc99bd028f4\"" Jul 11 05:13:07.520843 containerd[1533]: time="2025-07-11T05:13:07.520763180Z" level=info msg="StartContainer for \"d41d618d50fe38193e6b8fd9fc524a4bc71b101e800fde8f68760dc99bd028f4\"" Jul 11 05:13:07.522286 containerd[1533]: time="2025-07-11T05:13:07.522234486Z" level=info msg="connecting to shim d41d618d50fe38193e6b8fd9fc524a4bc71b101e800fde8f68760dc99bd028f4" address="unix:///run/containerd/s/0dda6adc01a2d4bda2e9367d977eeadea1cb6b40e76860b8aa1ee0f344030109" protocol=ttrpc version=3 Jul 11 05:13:07.546139 systemd[1]: Started cri-containerd-d41d618d50fe38193e6b8fd9fc524a4bc71b101e800fde8f68760dc99bd028f4.scope - libcontainer container d41d618d50fe38193e6b8fd9fc524a4bc71b101e800fde8f68760dc99bd028f4. Jul 11 05:13:07.588253 containerd[1533]: time="2025-07-11T05:13:07.588158164Z" level=info msg="StartContainer for \"d41d618d50fe38193e6b8fd9fc524a4bc71b101e800fde8f68760dc99bd028f4\" returns successfully" Jul 11 05:13:08.095315 kubelet[2671]: I0711 05:13:08.095233 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5fc56655d8-fzxgs" podStartSLOduration=21.769045678 podStartE2EDuration="26.095213594s" podCreationTimestamp="2025-07-11 05:12:42 +0000 UTC" firstStartedPulling="2025-07-11 05:13:03.16673279 +0000 UTC m=+39.425182029" lastFinishedPulling="2025-07-11 05:13:07.492900706 +0000 UTC m=+43.751349945" observedRunningTime="2025-07-11 05:13:08.093960066 +0000 UTC m=+44.352409305" watchObservedRunningTime="2025-07-11 05:13:08.095213594 +0000 UTC m=+44.353662833" Jul 11 05:13:08.550608 systemd[1]: Started sshd@8-10.0.0.147:22-10.0.0.1:53404.service - OpenSSH per-connection server daemon (10.0.0.1:53404). 
Jul 11 05:13:08.629561 sshd[5199]: Accepted publickey for core from 10.0.0.1 port 53404 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:13:08.631846 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:13:08.636884 systemd-logind[1508]: New session 9 of user core. Jul 11 05:13:08.646181 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 11 05:13:08.728297 containerd[1533]: time="2025-07-11T05:13:08.728255159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:13:08.728879 containerd[1533]: time="2025-07-11T05:13:08.728718752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8225702" Jul 11 05:13:08.730027 containerd[1533]: time="2025-07-11T05:13:08.729963879Z" level=info msg="ImageCreate event name:\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:13:08.731798 containerd[1533]: time="2025-07-11T05:13:08.731749124Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:13:08.732200 containerd[1533]: time="2025-07-11T05:13:08.732167953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"9594943\" in 1.238136926s" Jul 11 05:13:08.732200 containerd[1533]: time="2025-07-11T05:13:08.732197396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference 
\"sha256:14ecfabbdbebd1f5a36708f8b11a95a43baddd6a935d7d78c89a9c333849fcd2\"" Jul 11 05:13:08.735716 containerd[1533]: time="2025-07-11T05:13:08.735691320Z" level=info msg="CreateContainer within sandbox \"418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 11 05:13:08.744336 containerd[1533]: time="2025-07-11T05:13:08.744290203Z" level=info msg="Container d63f31ea9061d3c3a9678ca8d6ae9e78ddec7d4bb0294fd215c3d9cc3ad52d30: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:13:08.754482 containerd[1533]: time="2025-07-11T05:13:08.754102171Z" level=info msg="CreateContainer within sandbox \"418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d63f31ea9061d3c3a9678ca8d6ae9e78ddec7d4bb0294fd215c3d9cc3ad52d30\"" Jul 11 05:13:08.755176 containerd[1533]: time="2025-07-11T05:13:08.754938709Z" level=info msg="StartContainer for \"d63f31ea9061d3c3a9678ca8d6ae9e78ddec7d4bb0294fd215c3d9cc3ad52d30\"" Jul 11 05:13:08.756814 containerd[1533]: time="2025-07-11T05:13:08.756786799Z" level=info msg="connecting to shim d63f31ea9061d3c3a9678ca8d6ae9e78ddec7d4bb0294fd215c3d9cc3ad52d30" address="unix:///run/containerd/s/31dc3f2e8508c06e0aaddee5a15997391c5de7ea2337aa051479a29dff681fcb" protocol=ttrpc version=3 Jul 11 05:13:08.790225 systemd[1]: Started cri-containerd-d63f31ea9061d3c3a9678ca8d6ae9e78ddec7d4bb0294fd215c3d9cc3ad52d30.scope - libcontainer container d63f31ea9061d3c3a9678ca8d6ae9e78ddec7d4bb0294fd215c3d9cc3ad52d30. 
Jul 11 05:13:08.912100 containerd[1533]: time="2025-07-11T05:13:08.911946313Z" level=info msg="StartContainer for \"d63f31ea9061d3c3a9678ca8d6ae9e78ddec7d4bb0294fd215c3d9cc3ad52d30\" returns successfully" Jul 11 05:13:08.914328 containerd[1533]: time="2025-07-11T05:13:08.914285677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 11 05:13:08.929350 sshd[5202]: Connection closed by 10.0.0.1 port 53404 Jul 11 05:13:08.929693 sshd-session[5199]: pam_unix(sshd:session): session closed for user core Jul 11 05:13:08.933104 systemd[1]: sshd@8-10.0.0.147:22-10.0.0.1:53404.service: Deactivated successfully. Jul 11 05:13:08.934751 systemd[1]: session-9.scope: Deactivated successfully. Jul 11 05:13:08.935454 systemd-logind[1508]: Session 9 logged out. Waiting for processes to exit. Jul 11 05:13:08.936422 systemd-logind[1508]: Removed session 9. Jul 11 05:13:09.136276 containerd[1533]: time="2025-07-11T05:13:09.136236884Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d41d618d50fe38193e6b8fd9fc524a4bc71b101e800fde8f68760dc99bd028f4\" id:\"866999acb0e1fbca5ebffd5c93cb92105d2f1efa23d463de50be63821c041c12\" pid:5263 exited_at:{seconds:1752210789 nanos:135895180}" Jul 11 05:13:09.816725 containerd[1533]: time="2025-07-11T05:13:09.816683342Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e20e2e9cb603e7641ac94b992e08e1e6e9b2d501bec1b48f5698d415b50e74f\" id:\"5a52b588185cb5d5e9399a62e5601340d695a587da351edb3d9b64ca5bc313ae\" pid:5287 exited_at:{seconds:1752210789 nanos:814036040}" Jul 11 05:13:10.008031 containerd[1533]: time="2025-07-11T05:13:10.007962951Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:13:10.008393 containerd[1533]: time="2025-07-11T05:13:10.008352577Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes 
read=13754366" Jul 11 05:13:10.009183 containerd[1533]: time="2025-07-11T05:13:10.009157751Z" level=info msg="ImageCreate event name:\"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:13:10.011034 containerd[1533]: time="2025-07-11T05:13:10.010980594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 11 05:13:10.011765 containerd[1533]: time="2025-07-11T05:13:10.011584035Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"15123559\" in 1.097250955s" Jul 11 05:13:10.011765 containerd[1533]: time="2025-07-11T05:13:10.011616557Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:664ed31fb4687b0de23d6e6e116bc87b236790d7355871d3237c54452e02e27c\"" Jul 11 05:13:10.013743 containerd[1533]: time="2025-07-11T05:13:10.013715259Z" level=info msg="CreateContainer within sandbox \"418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 11 05:13:10.021986 containerd[1533]: time="2025-07-11T05:13:10.021769201Z" level=info msg="Container 76c7fdeebafecc83700ca72eb0ab2717d0e3d2d74fe32430c25b8b5841bfc5b5: CDI devices from CRI Config.CDIDevices: []" Jul 11 05:13:10.028961 containerd[1533]: time="2025-07-11T05:13:10.028897042Z" level=info msg="CreateContainer within sandbox 
\"418198c93aa50ed78c7de8894adb13069af8caca71d16f3e117b25882fa30788\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"76c7fdeebafecc83700ca72eb0ab2717d0e3d2d74fe32430c25b8b5841bfc5b5\"" Jul 11 05:13:10.029403 containerd[1533]: time="2025-07-11T05:13:10.029360553Z" level=info msg="StartContainer for \"76c7fdeebafecc83700ca72eb0ab2717d0e3d2d74fe32430c25b8b5841bfc5b5\"" Jul 11 05:13:10.030912 containerd[1533]: time="2025-07-11T05:13:10.030848133Z" level=info msg="connecting to shim 76c7fdeebafecc83700ca72eb0ab2717d0e3d2d74fe32430c25b8b5841bfc5b5" address="unix:///run/containerd/s/31dc3f2e8508c06e0aaddee5a15997391c5de7ea2337aa051479a29dff681fcb" protocol=ttrpc version=3 Jul 11 05:13:10.052115 systemd[1]: Started cri-containerd-76c7fdeebafecc83700ca72eb0ab2717d0e3d2d74fe32430c25b8b5841bfc5b5.scope - libcontainer container 76c7fdeebafecc83700ca72eb0ab2717d0e3d2d74fe32430c25b8b5841bfc5b5. Jul 11 05:13:10.082578 containerd[1533]: time="2025-07-11T05:13:10.082476052Z" level=info msg="StartContainer for \"76c7fdeebafecc83700ca72eb0ab2717d0e3d2d74fe32430c25b8b5841bfc5b5\" returns successfully" Jul 11 05:13:10.106103 kubelet[2671]: I0711 05:13:10.105890 2671 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-sv579" podStartSLOduration=24.170659332 podStartE2EDuration="28.105872108s" podCreationTimestamp="2025-07-11 05:12:42 +0000 UTC" firstStartedPulling="2025-07-11 05:13:06.077021023 +0000 UTC m=+42.335470262" lastFinishedPulling="2025-07-11 05:13:10.012233799 +0000 UTC m=+46.270683038" observedRunningTime="2025-07-11 05:13:10.105506123 +0000 UTC m=+46.363955322" watchObservedRunningTime="2025-07-11 05:13:10.105872108 +0000 UTC m=+46.364321347" Jul 11 05:13:10.909608 kubelet[2671]: I0711 05:13:10.909557 2671 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 11 
05:13:10.909608 kubelet[2671]: I0711 05:13:10.909605 2671 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 11 05:13:13.942383 systemd[1]: Started sshd@9-10.0.0.147:22-10.0.0.1:58952.service - OpenSSH per-connection server daemon (10.0.0.1:58952). Jul 11 05:13:13.992444 sshd[5346]: Accepted publickey for core from 10.0.0.1 port 58952 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:13:13.994612 sshd-session[5346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:13:14.003455 systemd-logind[1508]: New session 10 of user core. Jul 11 05:13:14.007134 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 11 05:13:14.169884 sshd[5350]: Connection closed by 10.0.0.1 port 58952 Jul 11 05:13:14.170539 sshd-session[5346]: pam_unix(sshd:session): session closed for user core Jul 11 05:13:14.178997 systemd[1]: sshd@9-10.0.0.147:22-10.0.0.1:58952.service: Deactivated successfully. Jul 11 05:13:14.180984 systemd[1]: session-10.scope: Deactivated successfully. Jul 11 05:13:14.181790 systemd-logind[1508]: Session 10 logged out. Waiting for processes to exit. Jul 11 05:13:14.184610 systemd[1]: Started sshd@10-10.0.0.147:22-10.0.0.1:58968.service - OpenSSH per-connection server daemon (10.0.0.1:58968). Jul 11 05:13:14.185331 systemd-logind[1508]: Removed session 10. Jul 11 05:13:14.253631 sshd[5368]: Accepted publickey for core from 10.0.0.1 port 58968 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:13:14.254893 sshd-session[5368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:13:14.259035 systemd-logind[1508]: New session 11 of user core. Jul 11 05:13:14.265212 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jul 11 05:13:14.433411 sshd[5371]: Connection closed by 10.0.0.1 port 58968 Jul 11 05:13:14.434567 sshd-session[5368]: pam_unix(sshd:session): session closed for user core Jul 11 05:13:14.445259 systemd[1]: sshd@10-10.0.0.147:22-10.0.0.1:58968.service: Deactivated successfully. Jul 11 05:13:14.448680 systemd[1]: session-11.scope: Deactivated successfully. Jul 11 05:13:14.453382 systemd-logind[1508]: Session 11 logged out. Waiting for processes to exit. Jul 11 05:13:14.457227 systemd[1]: Started sshd@11-10.0.0.147:22-10.0.0.1:58982.service - OpenSSH per-connection server daemon (10.0.0.1:58982). Jul 11 05:13:14.460160 systemd-logind[1508]: Removed session 11. Jul 11 05:13:14.513597 sshd[5382]: Accepted publickey for core from 10.0.0.1 port 58982 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:13:14.514327 sshd-session[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:13:14.519835 systemd-logind[1508]: New session 12 of user core. Jul 11 05:13:14.532155 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 11 05:13:14.687609 sshd[5385]: Connection closed by 10.0.0.1 port 58982 Jul 11 05:13:14.688294 sshd-session[5382]: pam_unix(sshd:session): session closed for user core Jul 11 05:13:14.691911 systemd[1]: sshd@11-10.0.0.147:22-10.0.0.1:58982.service: Deactivated successfully. Jul 11 05:13:14.694153 systemd[1]: session-12.scope: Deactivated successfully. Jul 11 05:13:14.694979 systemd-logind[1508]: Session 12 logged out. Waiting for processes to exit. Jul 11 05:13:14.696529 systemd-logind[1508]: Removed session 12. 
Jul 11 05:13:17.067348 kubelet[2671]: I0711 05:13:17.067306 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 05:13:17.138685 containerd[1533]: time="2025-07-11T05:13:17.138620435Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e20e2e9cb603e7641ac94b992e08e1e6e9b2d501bec1b48f5698d415b50e74f\" id:\"d75f7585602ec4a700dc23294b2d9c84e9f43a99932bd31f184491cd3fa0086c\" pid:5410 exited_at:{seconds:1752210797 nanos:138307176}" Jul 11 05:13:17.208334 containerd[1533]: time="2025-07-11T05:13:17.208284632Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e20e2e9cb603e7641ac94b992e08e1e6e9b2d501bec1b48f5698d415b50e74f\" id:\"fa7b6f7e80d0231448be05ad5e8894f9c3dd061dc40171a1c319161785c66cb1\" pid:5435 exited_at:{seconds:1752210797 nanos:207767081}" Jul 11 05:13:19.699220 systemd[1]: Started sshd@12-10.0.0.147:22-10.0.0.1:58986.service - OpenSSH per-connection server daemon (10.0.0.1:58986). Jul 11 05:13:19.764586 sshd[5449]: Accepted publickey for core from 10.0.0.1 port 58986 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:13:19.765793 sshd-session[5449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:13:19.769835 systemd-logind[1508]: New session 13 of user core. Jul 11 05:13:19.788172 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 11 05:13:19.923087 sshd[5452]: Connection closed by 10.0.0.1 port 58986 Jul 11 05:13:19.923617 sshd-session[5449]: pam_unix(sshd:session): session closed for user core Jul 11 05:13:19.936226 systemd[1]: sshd@12-10.0.0.147:22-10.0.0.1:58986.service: Deactivated successfully. Jul 11 05:13:19.938452 systemd[1]: session-13.scope: Deactivated successfully. Jul 11 05:13:19.939325 systemd-logind[1508]: Session 13 logged out. Waiting for processes to exit. 
Jul 11 05:13:19.941403 systemd[1]: Started sshd@13-10.0.0.147:22-10.0.0.1:59002.service - OpenSSH per-connection server daemon (10.0.0.1:59002). Jul 11 05:13:19.942553 systemd-logind[1508]: Removed session 13. Jul 11 05:13:20.002319 sshd[5465]: Accepted publickey for core from 10.0.0.1 port 59002 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:13:20.003451 sshd-session[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:13:20.007959 systemd-logind[1508]: New session 14 of user core. Jul 11 05:13:20.014133 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 11 05:13:20.200464 sshd[5468]: Connection closed by 10.0.0.1 port 59002 Jul 11 05:13:20.200997 sshd-session[5465]: pam_unix(sshd:session): session closed for user core Jul 11 05:13:20.222789 systemd[1]: sshd@13-10.0.0.147:22-10.0.0.1:59002.service: Deactivated successfully. Jul 11 05:13:20.225220 systemd[1]: session-14.scope: Deactivated successfully. Jul 11 05:13:20.225981 systemd-logind[1508]: Session 14 logged out. Waiting for processes to exit. Jul 11 05:13:20.228767 systemd[1]: Started sshd@14-10.0.0.147:22-10.0.0.1:59008.service - OpenSSH per-connection server daemon (10.0.0.1:59008). Jul 11 05:13:20.229277 systemd-logind[1508]: Removed session 14. Jul 11 05:13:20.281174 sshd[5479]: Accepted publickey for core from 10.0.0.1 port 59008 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:13:20.282232 sshd-session[5479]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:13:20.286029 systemd-logind[1508]: New session 15 of user core. Jul 11 05:13:20.300120 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 11 05:13:20.453432 containerd[1533]: time="2025-07-11T05:13:20.453351569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d41d618d50fe38193e6b8fd9fc524a4bc71b101e800fde8f68760dc99bd028f4\" id:\"426846939c5790f180eae2ca7f99fe5f62b8497272d0f3b80a92547ca8b14ae8\" pid:5502 exited_at:{seconds:1752210800 nanos:452864180}" Jul 11 05:13:20.977846 sshd[5482]: Connection closed by 10.0.0.1 port 59008 Jul 11 05:13:20.978274 sshd-session[5479]: pam_unix(sshd:session): session closed for user core Jul 11 05:13:20.988409 systemd[1]: sshd@14-10.0.0.147:22-10.0.0.1:59008.service: Deactivated successfully. Jul 11 05:13:20.993472 systemd[1]: session-15.scope: Deactivated successfully. Jul 11 05:13:20.994831 systemd-logind[1508]: Session 15 logged out. Waiting for processes to exit. Jul 11 05:13:20.998303 systemd[1]: Started sshd@15-10.0.0.147:22-10.0.0.1:59016.service - OpenSSH per-connection server daemon (10.0.0.1:59016). Jul 11 05:13:21.000245 systemd-logind[1508]: Removed session 15. Jul 11 05:13:21.054667 sshd[5525]: Accepted publickey for core from 10.0.0.1 port 59016 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:13:21.055759 sshd-session[5525]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:13:21.059822 systemd-logind[1508]: New session 16 of user core. Jul 11 05:13:21.069120 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 11 05:13:21.340170 sshd[5528]: Connection closed by 10.0.0.1 port 59016 Jul 11 05:13:21.342228 sshd-session[5525]: pam_unix(sshd:session): session closed for user core Jul 11 05:13:21.348054 systemd[1]: sshd@15-10.0.0.147:22-10.0.0.1:59016.service: Deactivated successfully. Jul 11 05:13:21.352492 systemd[1]: session-16.scope: Deactivated successfully. Jul 11 05:13:21.354688 systemd-logind[1508]: Session 16 logged out. Waiting for processes to exit. 
Jul 11 05:13:21.358008 systemd[1]: Started sshd@16-10.0.0.147:22-10.0.0.1:59024.service - OpenSSH per-connection server daemon (10.0.0.1:59024). Jul 11 05:13:21.359441 systemd-logind[1508]: Removed session 16. Jul 11 05:13:21.411207 sshd[5542]: Accepted publickey for core from 10.0.0.1 port 59024 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:13:21.412882 sshd-session[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:13:21.418191 systemd-logind[1508]: New session 17 of user core. Jul 11 05:13:21.424201 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 11 05:13:21.551851 sshd[5549]: Connection closed by 10.0.0.1 port 59024 Jul 11 05:13:21.552216 sshd-session[5542]: pam_unix(sshd:session): session closed for user core Jul 11 05:13:21.555243 systemd[1]: sshd@16-10.0.0.147:22-10.0.0.1:59024.service: Deactivated successfully. Jul 11 05:13:21.558562 systemd[1]: session-17.scope: Deactivated successfully. Jul 11 05:13:21.560491 systemd-logind[1508]: Session 17 logged out. Waiting for processes to exit. Jul 11 05:13:21.561390 systemd-logind[1508]: Removed session 17. Jul 11 05:13:26.568203 systemd[1]: Started sshd@17-10.0.0.147:22-10.0.0.1:38950.service - OpenSSH per-connection server daemon (10.0.0.1:38950). Jul 11 05:13:26.627195 sshd[5569]: Accepted publickey for core from 10.0.0.1 port 38950 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:13:26.628280 sshd-session[5569]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:13:26.632040 systemd-logind[1508]: New session 18 of user core. Jul 11 05:13:26.645258 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 11 05:13:26.766025 sshd[5572]: Connection closed by 10.0.0.1 port 38950 Jul 11 05:13:26.765915 sshd-session[5569]: pam_unix(sshd:session): session closed for user core Jul 11 05:13:26.769350 systemd[1]: sshd@17-10.0.0.147:22-10.0.0.1:38950.service: Deactivated successfully. Jul 11 05:13:26.771761 systemd[1]: session-18.scope: Deactivated successfully. Jul 11 05:13:26.772645 systemd-logind[1508]: Session 18 logged out. Waiting for processes to exit. Jul 11 05:13:26.773965 systemd-logind[1508]: Removed session 18. Jul 11 05:13:27.058301 containerd[1533]: time="2025-07-11T05:13:27.058264770Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0524df5fa1efa5fff86d682c93868c9015587ed65c90210d9b36c0d2456bd0c0\" id:\"b84e71881fdad4945b72d30839a6379ab0ee5288faee8cb4c1266f5e8f55b03d\" pid:5596 exited_at:{seconds:1752210807 nanos:57961393}" Jul 11 05:13:31.218324 kubelet[2671]: I0711 05:13:31.218142 2671 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 11 05:13:31.777911 systemd[1]: Started sshd@18-10.0.0.147:22-10.0.0.1:38960.service - OpenSSH per-connection server daemon (10.0.0.1:38960). Jul 11 05:13:31.823904 sshd[5615]: Accepted publickey for core from 10.0.0.1 port 38960 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:13:31.825097 sshd-session[5615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:13:31.829036 systemd-logind[1508]: New session 19 of user core. Jul 11 05:13:31.835139 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 11 05:13:31.952833 sshd[5618]: Connection closed by 10.0.0.1 port 38960 Jul 11 05:13:31.952678 sshd-session[5615]: pam_unix(sshd:session): session closed for user core Jul 11 05:13:31.956235 systemd[1]: sshd@18-10.0.0.147:22-10.0.0.1:38960.service: Deactivated successfully. Jul 11 05:13:31.958455 systemd[1]: session-19.scope: Deactivated successfully. 
Jul 11 05:13:31.959359 systemd-logind[1508]: Session 19 logged out. Waiting for processes to exit. Jul 11 05:13:31.960452 systemd-logind[1508]: Removed session 19. Jul 11 05:13:36.968276 systemd[1]: Started sshd@19-10.0.0.147:22-10.0.0.1:36176.service - OpenSSH per-connection server daemon (10.0.0.1:36176). Jul 11 05:13:37.024419 sshd[5631]: Accepted publickey for core from 10.0.0.1 port 36176 ssh2: RSA SHA256:rhUlpPvVlP+Ce62yA02n2qbsdDp0zaqTeZwlw15sny0 Jul 11 05:13:37.025880 sshd-session[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 11 05:13:37.030172 systemd-logind[1508]: New session 20 of user core. Jul 11 05:13:37.035130 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 11 05:13:37.185722 sshd[5634]: Connection closed by 10.0.0.1 port 36176 Jul 11 05:13:37.186168 sshd-session[5631]: pam_unix(sshd:session): session closed for user core Jul 11 05:13:37.190805 systemd[1]: sshd@19-10.0.0.147:22-10.0.0.1:36176.service: Deactivated successfully. Jul 11 05:13:37.192678 systemd[1]: session-20.scope: Deactivated successfully. Jul 11 05:13:37.194084 systemd-logind[1508]: Session 20 logged out. Waiting for processes to exit. Jul 11 05:13:37.195156 systemd-logind[1508]: Removed session 20.