Sep 10 04:51:47.754608 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 10 04:51:47.754627 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Wed Sep 10 03:31:38 -00 2025
Sep 10 04:51:47.754637 kernel: KASLR enabled
Sep 10 04:51:47.754643 kernel: efi: EFI v2.7 by EDK II
Sep 10 04:51:47.754648 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 10 04:51:47.754654 kernel: random: crng init done
Sep 10 04:51:47.754661 kernel: secureboot: Secure boot disabled
Sep 10 04:51:47.754666 kernel: ACPI: Early table checksum verification disabled
Sep 10 04:51:47.754672 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 10 04:51:47.754679 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 04:51:47.754685 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 04:51:47.754704 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 04:51:47.754710 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 04:51:47.754716 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 04:51:47.754723 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 04:51:47.754730 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 04:51:47.754744 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 04:51:47.754751 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 04:51:47.754757 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 04:51:47.754763 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 10 04:51:47.754769 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 10 04:51:47.754775 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 04:51:47.754781 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 10 04:51:47.754787 kernel: Zone ranges:
Sep 10 04:51:47.754793 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 04:51:47.754802 kernel: DMA32 empty
Sep 10 04:51:47.754808 kernel: Normal empty
Sep 10 04:51:47.754814 kernel: Device empty
Sep 10 04:51:47.754820 kernel: Movable zone start for each node
Sep 10 04:51:47.754826 kernel: Early memory node ranges
Sep 10 04:51:47.754832 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 10 04:51:47.754838 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 10 04:51:47.754845 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 10 04:51:47.754851 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 10 04:51:47.754857 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 10 04:51:47.754863 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 10 04:51:47.754869 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 10 04:51:47.754882 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 10 04:51:47.754888 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 10 04:51:47.754894 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 10 04:51:47.754903 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 10 04:51:47.754910 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 10 04:51:47.754916 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 10 04:51:47.754924 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 04:51:47.754941 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 10 04:51:47.754947 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 10 04:51:47.754953 kernel: psci: probing for conduit method from ACPI.
Sep 10 04:51:47.754960 kernel: psci: PSCIv1.1 detected in firmware.
Sep 10 04:51:47.754966 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 10 04:51:47.754972 kernel: psci: Trusted OS migration not required
Sep 10 04:51:47.754979 kernel: psci: SMC Calling Convention v1.1
Sep 10 04:51:47.754985 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 10 04:51:47.754991 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 10 04:51:47.754999 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 10 04:51:47.755006 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 10 04:51:47.755012 kernel: Detected PIPT I-cache on CPU0
Sep 10 04:51:47.755019 kernel: CPU features: detected: GIC system register CPU interface
Sep 10 04:51:47.755025 kernel: CPU features: detected: Spectre-v4
Sep 10 04:51:47.755031 kernel: CPU features: detected: Spectre-BHB
Sep 10 04:51:47.755037 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 10 04:51:47.755044 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 10 04:51:47.755050 kernel: CPU features: detected: ARM erratum 1418040
Sep 10 04:51:47.755056 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 10 04:51:47.755062 kernel: alternatives: applying boot alternatives
Sep 10 04:51:47.755070 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fc557398806956d5b7cf8f58d9bd1545e6d9edee390c62eb4b21701fba26a284
Sep 10 04:51:47.755078 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 04:51:47.755085 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 04:51:47.755091 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 04:51:47.755098 kernel: Fallback order for Node 0: 0
Sep 10 04:51:47.755104 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 10 04:51:47.755110 kernel: Policy zone: DMA
Sep 10 04:51:47.755117 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 04:51:47.755123 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 10 04:51:47.755129 kernel: software IO TLB: area num 4.
Sep 10 04:51:47.755135 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 10 04:51:47.755142 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 10 04:51:47.755149 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 04:51:47.755156 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 04:51:47.755163 kernel: rcu: RCU event tracing is enabled.
Sep 10 04:51:47.755169 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 04:51:47.755176 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 04:51:47.755182 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 04:51:47.755189 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 04:51:47.755195 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 04:51:47.755202 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 04:51:47.755208 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 04:51:47.755214 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 10 04:51:47.755222 kernel: GICv3: 256 SPIs implemented
Sep 10 04:51:47.755228 kernel: GICv3: 0 Extended SPIs implemented
Sep 10 04:51:47.755234 kernel: Root IRQ handler: gic_handle_irq
Sep 10 04:51:47.755241 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 10 04:51:47.755247 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 10 04:51:47.755253 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 10 04:51:47.755260 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 10 04:51:47.755266 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 10 04:51:47.755273 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 10 04:51:47.755280 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 10 04:51:47.755286 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 10 04:51:47.755293 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 04:51:47.755301 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 04:51:47.755307 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 10 04:51:47.755314 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 10 04:51:47.755321 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 10 04:51:47.755327 kernel: arm-pv: using stolen time PV
Sep 10 04:51:47.755334 kernel: Console: colour dummy device 80x25
Sep 10 04:51:47.755340 kernel: ACPI: Core revision 20240827
Sep 10 04:51:47.755347 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 10 04:51:47.755353 kernel: pid_max: default: 32768 minimum: 301
Sep 10 04:51:47.755360 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 10 04:51:47.755367 kernel: landlock: Up and running.
Sep 10 04:51:47.755374 kernel: SELinux: Initializing.
Sep 10 04:51:47.755380 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 04:51:47.755387 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 04:51:47.755393 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 04:51:47.755406 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 04:51:47.755412 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 10 04:51:47.755419 kernel: Remapping and enabling EFI services.
Sep 10 04:51:47.755426 kernel: smp: Bringing up secondary CPUs ...
Sep 10 04:51:47.755437 kernel: Detected PIPT I-cache on CPU1
Sep 10 04:51:47.755444 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 10 04:51:47.755451 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 10 04:51:47.755459 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 04:51:47.755466 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 10 04:51:47.755473 kernel: Detected PIPT I-cache on CPU2
Sep 10 04:51:47.755480 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 10 04:51:47.755487 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 10 04:51:47.755495 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 04:51:47.755502 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 10 04:51:47.755509 kernel: Detected PIPT I-cache on CPU3
Sep 10 04:51:47.755516 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 10 04:51:47.755523 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 10 04:51:47.755530 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 04:51:47.755537 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 10 04:51:47.755543 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 04:51:47.755550 kernel: SMP: Total of 4 processors activated.
Sep 10 04:51:47.755558 kernel: CPU: All CPU(s) started at EL1
Sep 10 04:51:47.755565 kernel: CPU features: detected: 32-bit EL0 Support
Sep 10 04:51:47.755572 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 10 04:51:47.755579 kernel: CPU features: detected: Common not Private translations
Sep 10 04:51:47.755586 kernel: CPU features: detected: CRC32 instructions
Sep 10 04:51:47.755593 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 10 04:51:47.755599 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 10 04:51:47.755606 kernel: CPU features: detected: LSE atomic instructions
Sep 10 04:51:47.755613 kernel: CPU features: detected: Privileged Access Never
Sep 10 04:51:47.755620 kernel: CPU features: detected: RAS Extension Support
Sep 10 04:51:47.755628 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 10 04:51:47.755635 kernel: alternatives: applying system-wide alternatives
Sep 10 04:51:47.755642 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 10 04:51:47.755649 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 10 04:51:47.755656 kernel: devtmpfs: initialized
Sep 10 04:51:47.755663 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 04:51:47.755670 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 04:51:47.755677 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 10 04:51:47.755685 kernel: 0 pages in range for non-PLT usage
Sep 10 04:51:47.755692 kernel: 508560 pages in range for PLT usage
Sep 10 04:51:47.755699 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 04:51:47.755706 kernel: SMBIOS 3.0.0 present.
Sep 10 04:51:47.755712 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 10 04:51:47.755719 kernel: DMI: Memory slots populated: 1/1
Sep 10 04:51:47.755726 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 04:51:47.755733 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 10 04:51:47.755740 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 10 04:51:47.755748 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 10 04:51:47.755755 kernel: audit: initializing netlink subsys (disabled)
Sep 10 04:51:47.755762 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Sep 10 04:51:47.755769 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 04:51:47.755775 kernel: cpuidle: using governor menu
Sep 10 04:51:47.755782 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 10 04:51:47.755789 kernel: ASID allocator initialised with 32768 entries
Sep 10 04:51:47.755796 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 04:51:47.755803 kernel: Serial: AMBA PL011 UART driver
Sep 10 04:51:47.755811 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 04:51:47.755818 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 04:51:47.755825 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 10 04:51:47.755832 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 10 04:51:47.755839 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 04:51:47.755846 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 04:51:47.755852 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 10 04:51:47.755859 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 10 04:51:47.755866 kernel: ACPI: Added _OSI(Module Device)
Sep 10 04:51:47.755876 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 04:51:47.755886 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 04:51:47.755893 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 04:51:47.755900 kernel: ACPI: Interpreter enabled
Sep 10 04:51:47.755906 kernel: ACPI: Using GIC for interrupt routing
Sep 10 04:51:47.755913 kernel: ACPI: MCFG table detected, 1 entries
Sep 10 04:51:47.755921 kernel: ACPI: CPU0 has been hot-added
Sep 10 04:51:47.755934 kernel: ACPI: CPU1 has been hot-added
Sep 10 04:51:47.755941 kernel: ACPI: CPU2 has been hot-added
Sep 10 04:51:47.755948 kernel: ACPI: CPU3 has been hot-added
Sep 10 04:51:47.755959 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 10 04:51:47.755967 kernel: printk: legacy console [ttyAMA0] enabled
Sep 10 04:51:47.755974 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 04:51:47.756122 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 04:51:47.756226 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 10 04:51:47.756311 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 10 04:51:47.756371 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 10 04:51:47.756431 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 10 04:51:47.756440 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 10 04:51:47.756447 kernel: PCI host bridge to bus 0000:00
Sep 10 04:51:47.756510 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 10 04:51:47.756563 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 10 04:51:47.756613 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 10 04:51:47.756663 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 04:51:47.756737 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 10 04:51:47.756807 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 10 04:51:47.756866 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 10 04:51:47.756953 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 10 04:51:47.757015 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 04:51:47.757073 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 10 04:51:47.757132 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 10 04:51:47.757196 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 10 04:51:47.757260 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 10 04:51:47.757330 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 10 04:51:47.757384 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 10 04:51:47.757393 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 10 04:51:47.757401 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 10 04:51:47.757408 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 10 04:51:47.757416 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 10 04:51:47.757423 kernel: iommu: Default domain type: Translated
Sep 10 04:51:47.757430 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 10 04:51:47.757437 kernel: efivars: Registered efivars operations
Sep 10 04:51:47.757444 kernel: vgaarb: loaded
Sep 10 04:51:47.757451 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 10 04:51:47.757458 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 04:51:47.757465 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 04:51:47.757471 kernel: pnp: PnP ACPI init
Sep 10 04:51:47.757535 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 10 04:51:47.757545 kernel: pnp: PnP ACPI: found 1 devices
Sep 10 04:51:47.757552 kernel: NET: Registered PF_INET protocol family
Sep 10 04:51:47.757559 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 04:51:47.757566 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 04:51:47.757573 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 04:51:47.757580 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 04:51:47.757587 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 04:51:47.757595 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 04:51:47.757602 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 04:51:47.757609 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 04:51:47.757616 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 04:51:47.757623 kernel: PCI: CLS 0 bytes, default 64
Sep 10 04:51:47.757630 kernel: kvm [1]: HYP mode not available
Sep 10 04:51:47.757637 kernel: Initialise system trusted keyrings
Sep 10 04:51:47.757643 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 04:51:47.757650 kernel: Key type asymmetric registered
Sep 10 04:51:47.757657 kernel: Asymmetric key parser 'x509' registered
Sep 10 04:51:47.757665 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 10 04:51:47.757672 kernel: io scheduler mq-deadline registered
Sep 10 04:51:47.757679 kernel: io scheduler kyber registered
Sep 10 04:51:47.757686 kernel: io scheduler bfq registered
Sep 10 04:51:47.757693 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 10 04:51:47.757700 kernel: ACPI: button: Power Button [PWRB]
Sep 10 04:51:47.757707 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 10 04:51:47.757764 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 10 04:51:47.757773 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 04:51:47.757781 kernel: thunder_xcv, ver 1.0
Sep 10 04:51:47.757788 kernel: thunder_bgx, ver 1.0
Sep 10 04:51:47.757795 kernel: nicpf, ver 1.0
Sep 10 04:51:47.757802 kernel: nicvf, ver 1.0
Sep 10 04:51:47.757866 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 10 04:51:47.757988 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T04:51:47 UTC (1757479907)
Sep 10 04:51:47.758000 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 10 04:51:47.758007 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 10 04:51:47.758016 kernel: watchdog: NMI not fully supported
Sep 10 04:51:47.758023 kernel: watchdog: Hard watchdog permanently disabled
Sep 10 04:51:47.758030 kernel: NET: Registered PF_INET6 protocol family
Sep 10 04:51:47.758036 kernel: Segment Routing with IPv6
Sep 10 04:51:47.758043 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 04:51:47.758050 kernel: NET: Registered PF_PACKET protocol family
Sep 10 04:51:47.758057 kernel: Key type dns_resolver registered
Sep 10 04:51:47.758064 kernel: registered taskstats version 1
Sep 10 04:51:47.758075 kernel: Loading compiled-in X.509 certificates
Sep 10 04:51:47.758085 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: ff30d9664cb85a7bf5140c28bc9be2659edf9859'
Sep 10 04:51:47.758096 kernel: Demotion targets for Node 0: null
Sep 10 04:51:47.758104 kernel: Key type .fscrypt registered
Sep 10 04:51:47.758111 kernel: Key type fscrypt-provisioning registered
Sep 10 04:51:47.758119 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 04:51:47.758125 kernel: ima: Allocated hash algorithm: sha1
Sep 10 04:51:47.758132 kernel: ima: No architecture policies found
Sep 10 04:51:47.758139 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 10 04:51:47.758147 kernel: clk: Disabling unused clocks
Sep 10 04:51:47.758154 kernel: PM: genpd: Disabling unused power domains
Sep 10 04:51:47.758161 kernel: Warning: unable to open an initial console.
Sep 10 04:51:47.758168 kernel: Freeing unused kernel memory: 38976K
Sep 10 04:51:47.758175 kernel: Run /init as init process
Sep 10 04:51:47.758182 kernel: with arguments:
Sep 10 04:51:47.758189 kernel: /init
Sep 10 04:51:47.758195 kernel: with environment:
Sep 10 04:51:47.758202 kernel: HOME=/
Sep 10 04:51:47.758209 kernel: TERM=linux
Sep 10 04:51:47.758217 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 04:51:47.758224 systemd[1]: Successfully made /usr/ read-only.
Sep 10 04:51:47.758234 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 04:51:47.758242 systemd[1]: Detected virtualization kvm.
Sep 10 04:51:47.758250 systemd[1]: Detected architecture arm64.
Sep 10 04:51:47.758256 systemd[1]: Running in initrd.
Sep 10 04:51:47.758264 systemd[1]: No hostname configured, using default hostname.
Sep 10 04:51:47.758273 systemd[1]: Hostname set to .
Sep 10 04:51:47.758280 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 04:51:47.758287 systemd[1]: Queued start job for default target initrd.target.
Sep 10 04:51:47.758295 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 04:51:47.758302 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 04:51:47.758310 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 04:51:47.758318 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 04:51:47.758325 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 04:51:47.758335 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 04:51:47.758343 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 04:51:47.758351 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 04:51:47.758358 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 04:51:47.758366 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 04:51:47.758373 systemd[1]: Reached target paths.target - Path Units.
Sep 10 04:51:47.758380 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 04:51:47.758389 systemd[1]: Reached target swap.target - Swaps.
Sep 10 04:51:47.758396 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 04:51:47.758403 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 04:51:47.758411 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 04:51:47.758419 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 04:51:47.758426 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 10 04:51:47.758433 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 04:51:47.758441 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 04:51:47.758450 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 04:51:47.758457 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 04:51:47.758464 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 04:51:47.758472 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 04:51:47.758479 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 04:51:47.758487 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 10 04:51:47.758494 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 04:51:47.758502 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 04:51:47.758509 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 04:51:47.758517 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 04:51:47.758525 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 04:51:47.758533 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 04:51:47.758540 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 04:51:47.758549 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 04:51:47.758572 systemd-journald[244]: Collecting audit messages is disabled.
Sep 10 04:51:47.758591 systemd-journald[244]: Journal started
Sep 10 04:51:47.758609 systemd-journald[244]: Runtime Journal (/run/log/journal/7f109bfd289b4fd9b38f2daf7c0d1b7d) is 6M, max 48.5M, 42.4M free.
Sep 10 04:51:47.756247 systemd-modules-load[245]: Inserted module 'overlay'
Sep 10 04:51:47.760365 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 04:51:47.762563 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 04:51:47.764481 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 04:51:47.766046 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 04:51:47.770887 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 04:51:47.767782 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 04:51:47.773205 kernel: Bridge firewalling registered
Sep 10 04:51:47.770743 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 04:51:47.772353 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 10 04:51:47.780275 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 04:51:47.782815 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 04:51:47.787226 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 04:51:47.789620 systemd-tmpfiles[263]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 10 04:51:47.792128 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 04:51:47.794861 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 04:51:47.796724 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 04:51:47.798945 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 04:51:47.800933 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 04:51:47.827814 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fc557398806956d5b7cf8f58d9bd1545e6d9edee390c62eb4b21701fba26a284
Sep 10 04:51:47.841126 systemd-resolved[290]: Positive Trust Anchors:
Sep 10 04:51:47.841144 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 04:51:47.841174 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 04:51:47.845718 systemd-resolved[290]: Defaulting to hostname 'linux'.
Sep 10 04:51:47.846598 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 04:51:47.849170 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 04:51:47.895960 kernel: SCSI subsystem initialized
Sep 10 04:51:47.899942 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 04:51:47.907962 kernel: iscsi: registered transport (tcp)
Sep 10 04:51:47.919951 kernel: iscsi: registered transport (qla4xxx)
Sep 10 04:51:47.919973 kernel: QLogic iSCSI HBA Driver
Sep 10 04:51:47.934612 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 04:51:47.954967 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 04:51:47.956661 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 04:51:47.998694 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 04:51:48.000669 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 04:51:48.058961 kernel: raid6: neonx8 gen() 15808 MB/s
Sep 10 04:51:48.075951 kernel: raid6: neonx4 gen() 15837 MB/s
Sep 10 04:51:48.092948 kernel: raid6: neonx2 gen() 13217 MB/s
Sep 10 04:51:48.109952 kernel: raid6: neonx1 gen() 10458 MB/s
Sep 10 04:51:48.126944 kernel: raid6: int64x8 gen() 6906 MB/s
Sep 10 04:51:48.143955 kernel: raid6: int64x4 gen() 7360 MB/s
Sep 10 04:51:48.160963 kernel: raid6: int64x2 gen() 6106 MB/s
Sep 10 04:51:48.177946 kernel: raid6: int64x1 gen() 5049 MB/s
Sep 10 04:51:48.177968 kernel: raid6: using algorithm neonx4 gen() 15837 MB/s
Sep 10 04:51:48.194962 kernel: raid6: .... xor() 12346 MB/s, rmw enabled
Sep 10 04:51:48.194989 kernel: raid6: using neon recovery algorithm
Sep 10 04:51:48.200350 kernel: xor: measuring software checksum speed
Sep 10 04:51:48.200363 kernel: 8regs : 21618 MB/sec
Sep 10 04:51:48.200372 kernel: 32regs : 21287 MB/sec
Sep 10 04:51:48.201329 kernel: arm64_neon : 28089 MB/sec
Sep 10 04:51:48.201341 kernel: xor: using function: arm64_neon (28089 MB/sec)
Sep 10 04:51:48.252958 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 04:51:48.258571 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 04:51:48.260745 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 04:51:48.289766 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Sep 10 04:51:48.293830 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 04:51:48.295489 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 04:51:48.318739 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Sep 10 04:51:48.340075 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 04:51:48.341685 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 04:51:48.403768 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 04:51:48.406051 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 04:51:48.448945 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 10 04:51:48.449645 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 04:51:48.458359 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 04:51:48.458376 kernel: GPT:9289727 != 19775487
Sep 10 04:51:48.458385 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 04:51:48.459015 kernel: GPT:9289727 != 19775487
Sep 10 04:51:48.459040 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 04:51:48.459994 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 04:51:48.467585 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 04:51:48.469526 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 04:51:48.472302 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 04:51:48.478124 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 04:51:48.504345 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 04:51:48.505499 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 04:51:48.513432 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 04:51:48.514649 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 04:51:48.527753 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 04:51:48.528788 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 04:51:48.536781 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 04:51:48.538664 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 04:51:48.539629 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 04:51:48.541264 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 04:51:48.543417 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 04:51:48.544985 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 04:51:48.562453 disk-uuid[595]: Primary Header is updated.
Sep 10 04:51:48.562453 disk-uuid[595]: Secondary Entries is updated.
Sep 10 04:51:48.562453 disk-uuid[595]: Secondary Header is updated.
Sep 10 04:51:48.565315 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 04:51:48.567961 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 04:51:49.573554 disk-uuid[599]: The operation has completed successfully.
Sep 10 04:51:49.575465 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 04:51:49.600778 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 04:51:49.600893 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 04:51:49.624258 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 04:51:49.645706 sh[614]: Success
Sep 10 04:51:49.657356 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 04:51:49.657394 kernel: device-mapper: uevent: version 1.0.3
Sep 10 04:51:49.658218 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 10 04:51:49.667961 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 10 04:51:49.688663 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 04:51:49.691151 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 04:51:49.707915 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 04:51:49.712943 kernel: BTRFS: device fsid e05b724d-13f9-4a54-9e36-10f4d8a13534 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (626)
Sep 10 04:51:49.712976 kernel: BTRFS info (device dm-0): first mount of filesystem e05b724d-13f9-4a54-9e36-10f4d8a13534
Sep 10 04:51:49.714542 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 10 04:51:49.718079 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 04:51:49.718095 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 10 04:51:49.719103 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 04:51:49.720131 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 10 04:51:49.721310 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 04:51:49.721986 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 04:51:49.724479 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 04:51:49.752959 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (659)
Sep 10 04:51:49.752998 kernel: BTRFS info (device vda6): first mount of filesystem 84f428ea-8f44-4c4f-b790-f94c8939540a
Sep 10 04:51:49.753010 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 04:51:49.755953 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 04:51:49.755982 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 04:51:49.760009 kernel: BTRFS info (device vda6): last unmount of filesystem 84f428ea-8f44-4c4f-b790-f94c8939540a
Sep 10 04:51:49.760235 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 04:51:49.762153 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 04:51:49.823107 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 04:51:49.825978 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 04:51:49.860895 systemd-networkd[801]: lo: Link UP
Sep 10 04:51:49.860906 systemd-networkd[801]: lo: Gained carrier
Sep 10 04:51:49.861595 systemd-networkd[801]: Enumeration completed
Sep 10 04:51:49.861691 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 04:51:49.862393 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 04:51:49.862397 systemd-networkd[801]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 04:51:49.863133 systemd-networkd[801]: eth0: Link UP
Sep 10 04:51:49.867156 ignition[702]: Ignition 2.22.0
Sep 10 04:51:49.863239 systemd-networkd[801]: eth0: Gained carrier
Sep 10 04:51:49.867163 ignition[702]: Stage: fetch-offline
Sep 10 04:51:49.863248 systemd-networkd[801]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 04:51:49.867189 ignition[702]: no configs at "/usr/lib/ignition/base.d"
Sep 10 04:51:49.863322 systemd[1]: Reached target network.target - Network.
Sep 10 04:51:49.867196 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 04:51:49.867275 ignition[702]: parsed url from cmdline: ""
Sep 10 04:51:49.867278 ignition[702]: no config URL provided
Sep 10 04:51:49.867282 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 04:51:49.867288 ignition[702]: no config at "/usr/lib/ignition/user.ign"
Sep 10 04:51:49.867305 ignition[702]: op(1): [started] loading QEMU firmware config module
Sep 10 04:51:49.867309 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 04:51:49.877794 ignition[702]: op(1): [finished] loading QEMU firmware config module
Sep 10 04:51:49.877818 ignition[702]: QEMU firmware config was not found. Ignoring...
Sep 10 04:51:49.886988 systemd-networkd[801]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 04:51:49.919777 ignition[702]: parsing config with SHA512: a69915a4d635297091a35465f094f71d1a1fc4b3a7787afc0a43be8dd90c5b4839234f55cfb18d2ed5633cff2501eb812575b31275a55993fd7e423c5c05eef3
Sep 10 04:51:49.925884 unknown[702]: fetched base config from "system"
Sep 10 04:51:49.925897 unknown[702]: fetched user config from "qemu"
Sep 10 04:51:49.926276 ignition[702]: fetch-offline: fetch-offline passed
Sep 10 04:51:49.926331 ignition[702]: Ignition finished successfully
Sep 10 04:51:49.928621 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 04:51:49.930048 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 04:51:49.930774 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 04:51:49.963260 ignition[814]: Ignition 2.22.0
Sep 10 04:51:49.963279 ignition[814]: Stage: kargs
Sep 10 04:51:49.963399 ignition[814]: no configs at "/usr/lib/ignition/base.d"
Sep 10 04:51:49.963408 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 04:51:49.964139 ignition[814]: kargs: kargs passed
Sep 10 04:51:49.964180 ignition[814]: Ignition finished successfully
Sep 10 04:51:49.967677 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 04:51:49.970064 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 04:51:49.995211 ignition[823]: Ignition 2.22.0
Sep 10 04:51:49.995231 ignition[823]: Stage: disks
Sep 10 04:51:49.995352 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Sep 10 04:51:49.995361 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 04:51:49.997675 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 04:51:49.996087 ignition[823]: disks: disks passed
Sep 10 04:51:49.999501 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 04:51:49.996128 ignition[823]: Ignition finished successfully
Sep 10 04:51:50.000601 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 04:51:50.001968 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 04:51:50.003457 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 04:51:50.004759 systemd[1]: Reached target basic.target - Basic System.
Sep 10 04:51:50.007232 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 04:51:50.027552 systemd-fsck[833]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 10 04:51:50.031876 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 04:51:50.033741 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 04:51:50.091758 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 04:51:50.092983 kernel: EXT4-fs (vda9): mounted filesystem dd8d03e8-6691-4f3c-8fb2-6f7ae674fb2f r/w with ordered data mode. Quota mode: none.
Sep 10 04:51:50.092832 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 04:51:50.094891 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 04:51:50.096395 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 04:51:50.097216 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 04:51:50.097253 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 04:51:50.097274 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 04:51:50.114304 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 04:51:50.116577 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 04:51:50.118950 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (841)
Sep 10 04:51:50.118974 kernel: BTRFS info (device vda6): first mount of filesystem 84f428ea-8f44-4c4f-b790-f94c8939540a
Sep 10 04:51:50.118984 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 04:51:50.122938 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 04:51:50.122971 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 04:51:50.123657 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 04:51:50.150289 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 04:51:50.153420 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory
Sep 10 04:51:50.156576 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 04:51:50.160213 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 04:51:50.222480 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 04:51:50.224516 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 04:51:50.225886 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 04:51:50.238959 kernel: BTRFS info (device vda6): last unmount of filesystem 84f428ea-8f44-4c4f-b790-f94c8939540a
Sep 10 04:51:50.248982 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 04:51:50.259231 ignition[955]: INFO : Ignition 2.22.0
Sep 10 04:51:50.259231 ignition[955]: INFO : Stage: mount
Sep 10 04:51:50.261014 ignition[955]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 04:51:50.261014 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 04:51:50.261014 ignition[955]: INFO : mount: mount passed
Sep 10 04:51:50.261014 ignition[955]: INFO : Ignition finished successfully
Sep 10 04:51:50.261988 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 04:51:50.264305 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 04:51:50.839915 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 04:51:50.841492 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 04:51:50.871101 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (968)
Sep 10 04:51:50.871144 kernel: BTRFS info (device vda6): first mount of filesystem 84f428ea-8f44-4c4f-b790-f94c8939540a
Sep 10 04:51:50.871155 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 04:51:50.874009 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 04:51:50.874032 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 04:51:50.875400 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 04:51:50.902448 ignition[985]: INFO : Ignition 2.22.0
Sep 10 04:51:50.902448 ignition[985]: INFO : Stage: files
Sep 10 04:51:50.903924 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 04:51:50.903924 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 04:51:50.903924 ignition[985]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 04:51:50.906960 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 04:51:50.906960 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 04:51:50.909169 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 04:51:50.909169 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 04:51:50.909169 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 04:51:50.908851 unknown[985]: wrote ssh authorized keys file for user: core
Sep 10 04:51:50.913738 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 10 04:51:50.913738 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 10 04:51:51.415237 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 04:51:51.683301 systemd-networkd[801]: eth0: Gained IPv6LL
Sep 10 04:51:51.707826 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 10 04:51:51.707826 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 04:51:51.711086 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 04:51:51.711086 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 04:51:51.711086 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 04:51:51.711086 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 04:51:51.711086 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 04:51:51.711086 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 04:51:51.711086 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 04:51:51.721220 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 04:51:51.721220 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 04:51:51.721220 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 04:51:51.721220 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 04:51:51.721220 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 04:51:51.721220 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 10 04:51:52.125918 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 10 04:51:52.920496 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 04:51:52.920496 ignition[985]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 10 04:51:52.923457 ignition[985]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 04:51:52.925334 ignition[985]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 04:51:52.925334 ignition[985]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 10 04:51:52.925334 ignition[985]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 10 04:51:52.925334 ignition[985]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 04:51:52.930824 ignition[985]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 04:51:52.930824 ignition[985]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 10 04:51:52.930824 ignition[985]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 04:51:52.938991 ignition[985]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 04:51:52.942666 ignition[985]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 04:51:52.944002 ignition[985]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 04:51:52.944002 ignition[985]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 04:51:52.944002 ignition[985]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 04:51:52.944002 ignition[985]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 04:51:52.944002 ignition[985]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 04:51:52.944002 ignition[985]: INFO : files: files passed
Sep 10 04:51:52.944002 ignition[985]: INFO : Ignition finished successfully
Sep 10 04:51:52.948992 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 04:51:52.953060 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 04:51:52.954906 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 04:51:52.972088 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 04:51:52.972222 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 04:51:52.974852 initrd-setup-root-after-ignition[1015]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 04:51:52.975937 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 04:51:52.975937 initrd-setup-root-after-ignition[1017]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 04:51:52.978274 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 04:51:52.977625 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 04:51:52.979495 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 04:51:52.982208 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 04:51:53.015424 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 04:51:53.015569 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 04:51:53.017521 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 10 04:51:53.019117 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 10 04:51:53.020708 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 10 04:51:53.021528 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 10 04:51:53.048241 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 04:51:53.050586 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 10 04:51:53.069878 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 10 04:51:53.071937 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 04:51:53.073011 systemd[1]: Stopped target timers.target - Timer Units.
Sep 10 04:51:53.074671 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 04:51:53.074798 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 04:51:53.077077 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 10 04:51:53.078849 systemd[1]: Stopped target basic.target - Basic System.
Sep 10 04:51:53.080477 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 10 04:51:53.081856 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 04:51:53.083559 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 10 04:51:53.085300 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 10 04:51:53.086978 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 10 04:51:53.088782 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 04:51:53.090541 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 10 04:51:53.092250 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 10 04:51:53.093779 systemd[1]: Stopped target swap.target - Swaps.
Sep 10 04:51:53.095122 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 04:51:53.095252 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 04:51:53.097286 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 10 04:51:53.098881 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 04:51:53.100664 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 10 04:51:53.104008 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 04:51:53.106043 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 10 04:51:53.106160 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 10 04:51:53.108610 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 04:51:53.108732 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 04:51:53.110578 systemd[1]: Stopped target paths.target - Path Units.
Sep 10 04:51:53.111972 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 04:51:53.115986 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 04:51:53.117093 systemd[1]: Stopped target slices.target - Slice Units.
Sep 10 04:51:53.118885 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 10 04:51:53.120219 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 04:51:53.120303 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 04:51:53.121659 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 04:51:53.121731 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 04:51:53.122981 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 10 04:51:53.123093 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 04:51:53.124762 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 10 04:51:53.124870 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 10 04:51:53.126956 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 10 04:51:53.128386 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 04:51:53.128512 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 04:51:53.148360 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 10 04:51:53.149106 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 10 04:51:53.149235 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 04:51:53.151000 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 10 04:51:53.151106 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 04:51:53.156810 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 10 04:51:53.156917 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 10 04:51:53.164095 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 04:51:53.165316 ignition[1041]: INFO : Ignition 2.22.0
Sep 10 04:51:53.165316 ignition[1041]: INFO : Stage: umount
Sep 10 04:51:53.166680 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 04:51:53.166680 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 04:51:53.166680 ignition[1041]: INFO : umount: umount passed
Sep 10 04:51:53.166680 ignition[1041]: INFO : Ignition finished successfully
Sep 10 04:51:53.168565 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 04:51:53.168681 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 10 04:51:53.169969 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 04:51:53.170045 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 10 04:51:53.171591 systemd[1]: Stopped target network.target - Network.
Sep 10 04:51:53.172610 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 04:51:53.172676 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 10 04:51:53.174004 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 04:51:53.174045 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 10 04:51:53.175409 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 04:51:53.175455 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 10 04:51:53.176760 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 10 04:51:53.176797 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 10 04:51:53.178438 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 04:51:53.178491 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 10 04:51:53.180143 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 10 04:51:53.181537 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 10 04:51:53.190668 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 04:51:53.190782 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 10 04:51:53.195303 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 10 04:51:53.195552 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 04:51:53.195685 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 10 04:51:53.198738 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 10 04:51:53.199363 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 10 04:51:53.200755 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 04:51:53.200797 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 04:51:53.203282 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 10 04:51:53.204633 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 04:51:53.204693 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 04:51:53.206381 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 04:51:53.206422 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 10 04:51:53.208814 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 04:51:53.208865 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 10 04:51:53.210450 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 10 04:51:53.210492 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 04:51:53.214092 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 04:51:53.217829 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 10 04:51:53.217908 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 10 04:51:53.229849 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 10 04:51:53.236132 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 04:51:53.237420 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 10 04:51:53.237458 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 10 04:51:53.238900 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 10 04:51:53.238941 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 04:51:53.240487 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 10 04:51:53.240532 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 10 04:51:53.242642 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 10 04:51:53.242686 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 10 04:51:53.244715 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 10 04:51:53.244761 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 10 04:51:53.248061 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 10 04:51:53.249810 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Sep 10 04:51:53.249879 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 04:51:53.252978 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 10 04:51:53.253026 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 04:51:53.255918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 10 04:51:53.255970 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 04:51:53.259653 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 10 04:51:53.259704 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 10 04:51:53.259738 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 10 04:51:53.260059 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 10 04:51:53.263032 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 10 04:51:53.268390 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 10 04:51:53.268507 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 10 04:51:53.270503 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 10 04:51:53.272736 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 10 04:51:53.302312 systemd[1]: Switching root. Sep 10 04:51:53.348942 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). 
Sep 10 04:51:53.348995 systemd-journald[244]: Journal stopped Sep 10 04:51:54.083159 kernel: SELinux: policy capability network_peer_controls=1 Sep 10 04:51:54.083213 kernel: SELinux: policy capability open_perms=1 Sep 10 04:51:54.083229 kernel: SELinux: policy capability extended_socket_class=1 Sep 10 04:51:54.083238 kernel: SELinux: policy capability always_check_network=0 Sep 10 04:51:54.083250 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 10 04:51:54.083259 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 10 04:51:54.083270 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 10 04:51:54.083279 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 10 04:51:54.083290 kernel: SELinux: policy capability userspace_initial_context=0 Sep 10 04:51:54.083299 kernel: audit: type=1403 audit(1757479913.512:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 10 04:51:54.083310 systemd[1]: Successfully loaded SELinux policy in 58.570ms. Sep 10 04:51:54.083329 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.422ms. Sep 10 04:51:54.083340 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 10 04:51:54.083351 systemd[1]: Detected virtualization kvm. Sep 10 04:51:54.083361 systemd[1]: Detected architecture arm64. Sep 10 04:51:54.083370 systemd[1]: Detected first boot. Sep 10 04:51:54.083380 systemd[1]: Initializing machine ID from VM UUID. Sep 10 04:51:54.083392 zram_generator::config[1088]: No configuration found. Sep 10 04:51:54.083403 kernel: NET: Registered PF_VSOCK protocol family Sep 10 04:51:54.083412 systemd[1]: Populated /etc with preset unit settings. 
Sep 10 04:51:54.083423 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 10 04:51:54.083432 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 10 04:51:54.083442 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 10 04:51:54.083452 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 10 04:51:54.083462 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 10 04:51:54.083476 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 10 04:51:54.083487 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 10 04:51:54.083497 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 10 04:51:54.083507 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 10 04:51:54.083521 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 10 04:51:54.083531 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 10 04:51:54.083541 systemd[1]: Created slice user.slice - User and Session Slice. Sep 10 04:51:54.083610 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 10 04:51:54.083620 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 10 04:51:54.083634 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 10 04:51:54.083654 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 10 04:51:54.083664 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 10 04:51:54.083675 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Sep 10 04:51:54.083685 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 10 04:51:54.083696 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 10 04:51:54.083706 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 10 04:51:54.083717 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 10 04:51:54.083730 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 10 04:51:54.083741 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 10 04:51:54.083751 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 10 04:51:54.083761 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 10 04:51:54.083770 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 10 04:51:54.083781 systemd[1]: Reached target slices.target - Slice Units. Sep 10 04:51:54.083792 systemd[1]: Reached target swap.target - Swaps. Sep 10 04:51:54.083802 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 10 04:51:54.083812 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 10 04:51:54.083824 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 10 04:51:54.083841 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 10 04:51:54.083853 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 10 04:51:54.083864 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 10 04:51:54.083874 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 10 04:51:54.083885 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 10 04:51:54.083896 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Sep 10 04:51:54.083906 systemd[1]: Mounting media.mount - External Media Directory... Sep 10 04:51:54.083916 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 10 04:51:54.084017 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 10 04:51:54.084031 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 10 04:51:54.084043 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 10 04:51:54.084055 systemd[1]: Reached target machines.target - Containers. Sep 10 04:51:54.084065 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 10 04:51:54.084076 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 04:51:54.084086 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 10 04:51:54.084096 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 10 04:51:54.084109 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 04:51:54.084119 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 04:51:54.084131 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 04:51:54.084142 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 10 04:51:54.084152 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 04:51:54.084163 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 10 04:51:54.084173 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 10 04:51:54.084184 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. 
Sep 10 04:51:54.084195 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 10 04:51:54.084205 systemd[1]: Stopped systemd-fsck-usr.service. Sep 10 04:51:54.084216 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 04:51:54.084226 kernel: fuse: init (API version 7.41) Sep 10 04:51:54.084236 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 10 04:51:54.084246 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 10 04:51:54.084256 kernel: ACPI: bus type drm_connector registered Sep 10 04:51:54.084266 kernel: loop: module loaded Sep 10 04:51:54.084276 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 10 04:51:54.084294 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 10 04:51:54.084318 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 10 04:51:54.084328 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 10 04:51:54.084339 systemd[1]: verity-setup.service: Deactivated successfully. Sep 10 04:51:54.084348 systemd[1]: Stopped verity-setup.service. Sep 10 04:51:54.084360 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 10 04:51:54.084370 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 10 04:51:54.084380 systemd[1]: Mounted media.mount - External Media Directory. Sep 10 04:51:54.084389 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 10 04:51:54.084399 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 10 04:51:54.084410 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Sep 10 04:51:54.084445 systemd-journald[1156]: Collecting audit messages is disabled. Sep 10 04:51:54.084469 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 10 04:51:54.084479 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 10 04:51:54.084491 systemd-journald[1156]: Journal started Sep 10 04:51:54.084510 systemd-journald[1156]: Runtime Journal (/run/log/journal/7f109bfd289b4fd9b38f2daf7c0d1b7d) is 6M, max 48.5M, 42.4M free. Sep 10 04:51:53.879154 systemd[1]: Queued start job for default target multi-user.target. Sep 10 04:51:53.902011 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 10 04:51:53.902400 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 10 04:51:54.088945 systemd[1]: Started systemd-journald.service - Journal Service. Sep 10 04:51:54.089464 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 10 04:51:54.089633 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 10 04:51:54.090850 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 04:51:54.091063 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 04:51:54.092151 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 04:51:54.092307 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 04:51:54.093382 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 04:51:54.093543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 04:51:54.094753 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 10 04:51:54.095140 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 10 04:51:54.096224 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 04:51:54.096380 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 10 04:51:54.097704 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 10 04:51:54.099014 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 10 04:51:54.100271 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 10 04:51:54.101714 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 10 04:51:54.114062 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 10 04:51:54.116158 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 10 04:51:54.117827 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 10 04:51:54.118882 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 10 04:51:54.118913 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 10 04:51:54.120555 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 10 04:51:54.134707 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 10 04:51:54.135786 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 04:51:54.137231 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 10 04:51:54.138885 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 10 04:51:54.139935 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 04:51:54.143057 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Sep 10 04:51:54.144015 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 04:51:54.146190 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 04:51:54.147916 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 10 04:51:54.149610 systemd-journald[1156]: Time spent on flushing to /var/log/journal/7f109bfd289b4fd9b38f2daf7c0d1b7d is 30.289ms for 888 entries. Sep 10 04:51:54.149610 systemd-journald[1156]: System Journal (/var/log/journal/7f109bfd289b4fd9b38f2daf7c0d1b7d) is 8M, max 195.6M, 187.6M free. Sep 10 04:51:54.190142 systemd-journald[1156]: Received client request to flush runtime journal. Sep 10 04:51:54.190205 kernel: loop0: detected capacity change from 0 to 100632 Sep 10 04:51:54.190220 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 10 04:51:54.150119 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 10 04:51:54.161132 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 10 04:51:54.163221 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 10 04:51:54.164724 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 10 04:51:54.176470 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 10 04:51:54.179286 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 04:51:54.181922 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 10 04:51:54.186297 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 10 04:51:54.193272 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 10 04:51:54.201870 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Sep 10 04:51:54.204654 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 10 04:51:54.212952 kernel: loop1: detected capacity change from 0 to 119368 Sep 10 04:51:54.219110 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 10 04:51:54.233303 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Sep 10 04:51:54.233323 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Sep 10 04:51:54.236887 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 10 04:51:54.245959 kernel: loop2: detected capacity change from 0 to 211168 Sep 10 04:51:54.280948 kernel: loop3: detected capacity change from 0 to 100632 Sep 10 04:51:54.285956 kernel: loop4: detected capacity change from 0 to 119368 Sep 10 04:51:54.290952 kernel: loop5: detected capacity change from 0 to 211168 Sep 10 04:51:54.294321 (sd-merge)[1228]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 10 04:51:54.294706 (sd-merge)[1228]: Merged extensions into '/usr'. Sep 10 04:51:54.300081 systemd[1]: Reload requested from client PID 1205 ('systemd-sysext') (unit systemd-sysext.service)... Sep 10 04:51:54.300101 systemd[1]: Reloading... Sep 10 04:51:54.354957 zram_generator::config[1253]: No configuration found. Sep 10 04:51:54.418668 ldconfig[1200]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 10 04:51:54.501190 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 10 04:51:54.501618 systemd[1]: Reloading finished in 201 ms. Sep 10 04:51:54.523945 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 10 04:51:54.525190 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 10 04:51:54.536199 systemd[1]: Starting ensure-sysext.service... 
Sep 10 04:51:54.537843 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 10 04:51:54.546838 systemd[1]: Reload requested from client PID 1288 ('systemctl') (unit ensure-sysext.service)... Sep 10 04:51:54.546853 systemd[1]: Reloading... Sep 10 04:51:54.550667 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 10 04:51:54.550700 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 10 04:51:54.550962 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 10 04:51:54.551169 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 10 04:51:54.551868 systemd-tmpfiles[1289]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 10 04:51:54.552115 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Sep 10 04:51:54.552172 systemd-tmpfiles[1289]: ACLs are not supported, ignoring. Sep 10 04:51:54.554747 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 04:51:54.554760 systemd-tmpfiles[1289]: Skipping /boot Sep 10 04:51:54.560591 systemd-tmpfiles[1289]: Detected autofs mount point /boot during canonicalization of boot. Sep 10 04:51:54.560606 systemd-tmpfiles[1289]: Skipping /boot Sep 10 04:51:54.594971 zram_generator::config[1316]: No configuration found. Sep 10 04:51:54.725774 systemd[1]: Reloading finished in 178 ms. Sep 10 04:51:54.744446 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 10 04:51:54.751997 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 10 04:51:54.761954 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Sep 10 04:51:54.764038 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 10 04:51:54.766019 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 10 04:51:54.768537 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 10 04:51:54.773068 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 10 04:51:54.775087 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 10 04:51:54.781514 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 04:51:54.783050 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 04:51:54.784917 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 10 04:51:54.786916 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 04:51:54.788126 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 04:51:54.788242 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 10 04:51:54.793196 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 04:51:54.793384 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 04:51:54.793505 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 10 04:51:54.796102 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 10 04:51:54.798981 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 10 04:51:54.800600 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 04:51:54.800764 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 04:51:54.802163 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 04:51:54.802307 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 04:51:54.813004 systemd-udevd[1359]: Using default interface naming scheme 'v255'. Sep 10 04:51:54.815490 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 10 04:51:54.817350 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 10 04:51:54.821604 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 10 04:51:54.821764 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 10 04:51:54.826945 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 10 04:51:54.827772 augenrules[1385]: No rules Sep 10 04:51:54.828116 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 10 04:51:54.832037 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 10 04:51:54.835177 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 10 04:51:54.836103 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 10 04:51:54.836148 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Sep 10 04:51:54.836183 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 10 04:51:54.837453 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 10 04:51:54.838702 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 10 04:51:54.839412 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 10 04:51:54.853361 systemd[1]: Finished ensure-sysext.service. Sep 10 04:51:54.855632 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 04:51:54.855811 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 04:51:54.870530 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 10 04:51:54.870754 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 10 04:51:54.875128 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 10 04:51:54.875289 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 10 04:51:54.876625 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 10 04:51:54.876771 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 10 04:51:54.878998 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 10 04:51:54.889708 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 10 04:51:54.891305 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 10 04:51:54.892902 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 10 04:51:54.894631 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Sep 10 04:51:54.921397 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 10 04:51:54.953209 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 10 04:51:54.956491 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 10 04:51:54.984689 systemd-networkd[1432]: lo: Link UP Sep 10 04:51:54.984703 systemd-networkd[1432]: lo: Gained carrier Sep 10 04:51:54.986014 systemd-networkd[1432]: Enumeration completed Sep 10 04:51:54.986263 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 10 04:51:54.986463 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 04:51:54.986473 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 04:51:54.987030 systemd-networkd[1432]: eth0: Link UP Sep 10 04:51:54.987133 systemd-networkd[1432]: eth0: Gained carrier Sep 10 04:51:54.987152 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 04:51:54.990120 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 10 04:51:54.995877 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 10 04:51:55.000077 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 04:51:55.001116 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 10 04:51:55.001524 systemd-resolved[1355]: Positive Trust Anchors: Sep 10 04:51:55.001542 systemd-resolved[1355]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 10 04:51:55.001578 systemd-resolved[1355]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 10 04:51:55.002010 systemd-networkd[1432]: eth0: DHCPv4 address 10.0.0.60/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 04:51:55.003126 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 04:51:55.004487 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection. Sep 10 04:51:55.006248 systemd-timesyncd[1433]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 04:51:55.006309 systemd-timesyncd[1433]: Initial clock synchronization to Wed 2025-09-10 04:51:55.061951 UTC. Sep 10 04:51:55.009495 systemd-resolved[1355]: Defaulting to hostname 'linux'. Sep 10 04:51:55.011606 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 04:51:55.012612 systemd[1]: Reached target network.target - Network. Sep 10 04:51:55.013349 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 04:51:55.016026 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 04:51:55.017073 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 04:51:55.018200 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 04:51:55.019676 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Sep 10 04:51:55.020703 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 04:51:55.021714 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 04:51:55.022701 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 04:51:55.022734 systemd[1]: Reached target paths.target - Path Units. Sep 10 04:51:55.023459 systemd[1]: Reached target timers.target - Timer Units. Sep 10 04:51:55.024887 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 04:51:55.027285 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 04:51:55.029799 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 04:51:55.031260 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 04:51:55.032274 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 10 04:51:55.035220 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 04:51:55.036309 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 04:51:55.039815 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 10 04:51:55.041085 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 04:51:55.042344 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 04:51:55.043115 systemd[1]: Reached target basic.target - Basic System. Sep 10 04:51:55.043835 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 04:51:55.043866 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Sep 10 04:51:55.045109 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 04:51:55.047106 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 04:51:55.048582 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 04:51:55.053168 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 04:51:55.056162 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 04:51:55.059002 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 04:51:55.070123 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 04:51:55.072534 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 04:51:55.074367 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 04:51:55.078043 jq[1466]: false Sep 10 04:51:55.077975 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 04:51:55.081161 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 04:51:55.082797 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 04:51:55.083215 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 04:51:55.084616 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 04:51:55.088885 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 04:51:55.092554 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 04:51:55.093909 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 10 04:51:55.094192 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 04:51:55.096747 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 04:51:55.097134 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 04:51:55.105789 extend-filesystems[1467]: Found /dev/vda6 Sep 10 04:51:55.110188 jq[1483]: true Sep 10 04:51:55.106743 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 04:51:55.106947 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 04:51:55.114387 extend-filesystems[1467]: Found /dev/vda9 Sep 10 04:51:55.115951 tar[1488]: linux-arm64/LICENSE Sep 10 04:51:55.116541 extend-filesystems[1467]: Checking size of /dev/vda9 Sep 10 04:51:55.118671 tar[1488]: linux-arm64/helm Sep 10 04:51:55.120165 (ntainerd)[1503]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 04:51:55.127318 update_engine[1479]: I20250910 04:51:55.126844 1479 main.cc:92] Flatcar Update Engine starting Sep 10 04:51:55.128948 jq[1502]: true Sep 10 04:51:55.135644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 04:51:55.139673 dbus-daemon[1464]: [system] SELinux support is enabled Sep 10 04:51:55.140919 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 04:51:55.141182 extend-filesystems[1467]: Resized partition /dev/vda9 Sep 10 04:51:55.145241 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 04:51:55.145270 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Sep 10 04:51:55.146560 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 04:51:55.146583 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 04:51:55.148493 update_engine[1479]: I20250910 04:51:55.148450 1479 update_check_scheduler.cc:74] Next update check in 11m30s Sep 10 04:51:55.149106 extend-filesystems[1513]: resize2fs 1.47.3 (8-Jul-2025) Sep 10 04:51:55.151617 systemd[1]: Started update-engine.service - Update Engine. Sep 10 04:51:55.155967 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 04:51:55.159525 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 04:51:55.176962 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 04:51:55.188216 extend-filesystems[1513]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 04:51:55.188216 extend-filesystems[1513]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 04:51:55.188216 extend-filesystems[1513]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 04:51:55.193238 extend-filesystems[1467]: Resized filesystem in /dev/vda9 Sep 10 04:51:55.189807 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 04:51:55.192383 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 04:51:55.202724 bash[1531]: Updated "/home/core/.ssh/authorized_keys" Sep 10 04:51:55.220587 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 04:51:55.227757 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 04:51:55.231175 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 10 04:51:55.266510 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 04:51:55.281963 systemd-logind[1476]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 04:51:55.283302 systemd-logind[1476]: New seat seat0. Sep 10 04:51:55.285034 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 04:51:55.317908 containerd[1503]: time="2025-09-10T04:51:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 10 04:51:55.318630 containerd[1503]: time="2025-09-10T04:51:55.318577880Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 10 04:51:55.326918 containerd[1503]: time="2025-09-10T04:51:55.326876320Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.44µs" Sep 10 04:51:55.326918 containerd[1503]: time="2025-09-10T04:51:55.326908080Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 10 04:51:55.326918 containerd[1503]: time="2025-09-10T04:51:55.326938440Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 10 04:51:55.327094 containerd[1503]: time="2025-09-10T04:51:55.327074720Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 10 04:51:55.327131 containerd[1503]: time="2025-09-10T04:51:55.327098600Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 10 04:51:55.327131 containerd[1503]: time="2025-09-10T04:51:55.327123760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 04:51:55.327190 containerd[1503]: 
time="2025-09-10T04:51:55.327171680Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 04:51:55.327210 containerd[1503]: time="2025-09-10T04:51:55.327196440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 04:51:55.327428 containerd[1503]: time="2025-09-10T04:51:55.327393800Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 04:51:55.327428 containerd[1503]: time="2025-09-10T04:51:55.327414880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 04:51:55.327428 containerd[1503]: time="2025-09-10T04:51:55.327426120Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 04:51:55.327491 containerd[1503]: time="2025-09-10T04:51:55.327434000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 10 04:51:55.327523 containerd[1503]: time="2025-09-10T04:51:55.327504120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 10 04:51:55.327722 containerd[1503]: time="2025-09-10T04:51:55.327703240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 04:51:55.327753 containerd[1503]: time="2025-09-10T04:51:55.327738560Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 
04:51:55.327773 containerd[1503]: time="2025-09-10T04:51:55.327752320Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 10 04:51:55.327800 containerd[1503]: time="2025-09-10T04:51:55.327788680Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 10 04:51:55.328208 containerd[1503]: time="2025-09-10T04:51:55.328186560Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 10 04:51:55.328288 containerd[1503]: time="2025-09-10T04:51:55.328269680Z" level=info msg="metadata content store policy set" policy=shared Sep 10 04:51:55.331752 containerd[1503]: time="2025-09-10T04:51:55.331723400Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 10 04:51:55.331807 containerd[1503]: time="2025-09-10T04:51:55.331774160Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 10 04:51:55.331807 containerd[1503]: time="2025-09-10T04:51:55.331789320Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 10 04:51:55.331807 containerd[1503]: time="2025-09-10T04:51:55.331801280Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 10 04:51:55.331882 containerd[1503]: time="2025-09-10T04:51:55.331813400Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 10 04:51:55.331882 containerd[1503]: time="2025-09-10T04:51:55.331835080Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 10 04:51:55.331882 containerd[1503]: time="2025-09-10T04:51:55.331853480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 10 04:51:55.331882 containerd[1503]: 
time="2025-09-10T04:51:55.331868040Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 10 04:51:55.331882 containerd[1503]: time="2025-09-10T04:51:55.331878760Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 10 04:51:55.331982 containerd[1503]: time="2025-09-10T04:51:55.331888760Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 10 04:51:55.331982 containerd[1503]: time="2025-09-10T04:51:55.331897720Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 10 04:51:55.331982 containerd[1503]: time="2025-09-10T04:51:55.331909640Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 10 04:51:55.332054 containerd[1503]: time="2025-09-10T04:51:55.332031080Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 10 04:51:55.332078 containerd[1503]: time="2025-09-10T04:51:55.332061560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 10 04:51:55.332154 containerd[1503]: time="2025-09-10T04:51:55.332078480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 10 04:51:55.332154 containerd[1503]: time="2025-09-10T04:51:55.332089720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 10 04:51:55.332154 containerd[1503]: time="2025-09-10T04:51:55.332101840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 10 04:51:55.332154 containerd[1503]: time="2025-09-10T04:51:55.332112800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 10 04:51:55.332154 containerd[1503]: time="2025-09-10T04:51:55.332124000Z" level=info 
msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 10 04:51:55.332154 containerd[1503]: time="2025-09-10T04:51:55.332134000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 10 04:51:55.332154 containerd[1503]: time="2025-09-10T04:51:55.332144960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 10 04:51:55.332154 containerd[1503]: time="2025-09-10T04:51:55.332155080Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 10 04:51:55.332297 containerd[1503]: time="2025-09-10T04:51:55.332165520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 10 04:51:55.332356 containerd[1503]: time="2025-09-10T04:51:55.332338880Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 10 04:51:55.332389 containerd[1503]: time="2025-09-10T04:51:55.332358280Z" level=info msg="Start snapshots syncer" Sep 10 04:51:55.332389 containerd[1503]: time="2025-09-10T04:51:55.332382080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 10 04:51:55.332610 containerd[1503]: time="2025-09-10T04:51:55.332575320Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 10 04:51:55.332711 containerd[1503]: time="2025-09-10T04:51:55.332624480Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 10 04:51:55.332711 containerd[1503]: time="2025-09-10T04:51:55.332687920Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 10 04:51:55.332806 containerd[1503]: time="2025-09-10T04:51:55.332781440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 10 04:51:55.332844 containerd[1503]: time="2025-09-10T04:51:55.332809840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 10 04:51:55.332844 containerd[1503]: time="2025-09-10T04:51:55.332833760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 10 04:51:55.334238 containerd[1503]: time="2025-09-10T04:51:55.333180920Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 10 04:51:55.334238 containerd[1503]: time="2025-09-10T04:51:55.333213440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 10 04:51:55.334238 containerd[1503]: time="2025-09-10T04:51:55.333228000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 10 04:51:55.334238 containerd[1503]: time="2025-09-10T04:51:55.333243360Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 10 04:51:55.334238 containerd[1503]: time="2025-09-10T04:51:55.333274600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 10 04:51:55.334238 containerd[1503]: time="2025-09-10T04:51:55.333293240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 10 04:51:55.334238 containerd[1503]: time="2025-09-10T04:51:55.333310960Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 10 04:51:55.334238 containerd[1503]: time="2025-09-10T04:51:55.333378120Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 04:51:55.334238 containerd[1503]: time="2025-09-10T04:51:55.333528400Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 04:51:55.334238 containerd[1503]: time="2025-09-10T04:51:55.333550400Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 04:51:55.334238 containerd[1503]: time="2025-09-10T04:51:55.333562640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 04:51:55.334238 containerd[1503]: time="2025-09-10T04:51:55.333574920Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 10 04:51:55.334238 containerd[1503]: time="2025-09-10T04:51:55.333594120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 10 04:51:55.334238 containerd[1503]: time="2025-09-10T04:51:55.333609360Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 10 04:51:55.334508 containerd[1503]: time="2025-09-10T04:51:55.333685200Z" level=info msg="runtime interface created" Sep 10 04:51:55.334508 containerd[1503]: time="2025-09-10T04:51:55.333691160Z" level=info msg="created NRI interface" Sep 10 04:51:55.334508 containerd[1503]: time="2025-09-10T04:51:55.333705960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 10 04:51:55.334508 containerd[1503]: time="2025-09-10T04:51:55.333720040Z" level=info msg="Connect containerd service" Sep 10 04:51:55.334508 containerd[1503]: time="2025-09-10T04:51:55.333763200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 04:51:55.334841 
containerd[1503]: time="2025-09-10T04:51:55.334799600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 04:51:55.408010 containerd[1503]: time="2025-09-10T04:51:55.407538960Z" level=info msg="Start subscribing containerd event" Sep 10 04:51:55.408010 containerd[1503]: time="2025-09-10T04:51:55.407613880Z" level=info msg="Start recovering state" Sep 10 04:51:55.408010 containerd[1503]: time="2025-09-10T04:51:55.407712000Z" level=info msg="Start event monitor" Sep 10 04:51:55.408010 containerd[1503]: time="2025-09-10T04:51:55.407727200Z" level=info msg="Start cni network conf syncer for default" Sep 10 04:51:55.408010 containerd[1503]: time="2025-09-10T04:51:55.407739600Z" level=info msg="Start streaming server" Sep 10 04:51:55.408010 containerd[1503]: time="2025-09-10T04:51:55.407748360Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 10 04:51:55.408010 containerd[1503]: time="2025-09-10T04:51:55.407755440Z" level=info msg="runtime interface starting up..." Sep 10 04:51:55.408010 containerd[1503]: time="2025-09-10T04:51:55.407761720Z" level=info msg="starting plugins..." Sep 10 04:51:55.408010 containerd[1503]: time="2025-09-10T04:51:55.407774240Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 10 04:51:55.408010 containerd[1503]: time="2025-09-10T04:51:55.407844560Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 04:51:55.408010 containerd[1503]: time="2025-09-10T04:51:55.407896240Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 04:51:55.408010 containerd[1503]: time="2025-09-10T04:51:55.407981280Z" level=info msg="containerd successfully booted in 0.090437s" Sep 10 04:51:55.408084 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 10 04:51:55.496920 tar[1488]: linux-arm64/README.md Sep 10 04:51:55.511998 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 04:51:56.066979 sshd_keygen[1487]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 04:51:56.086145 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 04:51:56.089218 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 04:51:56.115097 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 04:51:56.115289 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 04:51:56.117510 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 04:51:56.141481 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 04:51:56.143963 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 04:51:56.145825 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 10 04:51:56.147012 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 04:51:56.163162 systemd-networkd[1432]: eth0: Gained IPv6LL Sep 10 04:51:56.165306 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 04:51:56.166759 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 04:51:56.168892 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 04:51:56.171067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 04:51:56.184222 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 04:51:56.199731 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 04:51:56.199983 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 04:51:56.202528 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 10 04:51:56.203036 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 04:51:56.726003 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 04:51:56.727322 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 04:51:56.729834 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 04:51:56.733011 systemd[1]: Startup finished in 1.976s (kernel) + 5.905s (initrd) + 3.278s (userspace) = 11.161s. Sep 10 04:51:57.074635 kubelet[1607]: E0910 04:51:57.074533 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 04:51:57.077235 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 04:51:57.077383 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 04:51:57.077684 systemd[1]: kubelet.service: Consumed 754ms CPU time, 259.5M memory peak. Sep 10 04:52:00.917103 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 04:52:00.918053 systemd[1]: Started sshd@0-10.0.0.60:22-10.0.0.1:55448.service - OpenSSH per-connection server daemon (10.0.0.1:55448). Sep 10 04:52:00.982562 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 55448 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:52:00.984058 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:52:00.989803 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 04:52:00.990602 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Sep 10 04:52:00.995796 systemd-logind[1476]: New session 1 of user core. Sep 10 04:52:01.011471 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 04:52:01.014740 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 04:52:01.032678 (systemd)[1626]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 04:52:01.034606 systemd-logind[1476]: New session c1 of user core. Sep 10 04:52:01.136790 systemd[1626]: Queued start job for default target default.target. Sep 10 04:52:01.159839 systemd[1626]: Created slice app.slice - User Application Slice. Sep 10 04:52:01.159866 systemd[1626]: Reached target paths.target - Paths. Sep 10 04:52:01.159899 systemd[1626]: Reached target timers.target - Timers. Sep 10 04:52:01.160925 systemd[1626]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 04:52:01.169193 systemd[1626]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 04:52:01.169253 systemd[1626]: Reached target sockets.target - Sockets. Sep 10 04:52:01.169290 systemd[1626]: Reached target basic.target - Basic System. Sep 10 04:52:01.169318 systemd[1626]: Reached target default.target - Main User Target. Sep 10 04:52:01.169343 systemd[1626]: Startup finished in 129ms. Sep 10 04:52:01.169440 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 04:52:01.170790 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 04:52:01.226080 systemd[1]: Started sshd@1-10.0.0.60:22-10.0.0.1:55464.service - OpenSSH per-connection server daemon (10.0.0.1:55464). Sep 10 04:52:01.282517 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 55464 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:52:01.283653 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:52:01.287989 systemd-logind[1476]: New session 2 of user core. 
Sep 10 04:52:01.298068 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 04:52:01.347624 sshd[1640]: Connection closed by 10.0.0.1 port 55464 Sep 10 04:52:01.348268 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Sep 10 04:52:01.358622 systemd[1]: sshd@1-10.0.0.60:22-10.0.0.1:55464.service: Deactivated successfully. Sep 10 04:52:01.361101 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 04:52:01.361686 systemd-logind[1476]: Session 2 logged out. Waiting for processes to exit. Sep 10 04:52:01.363654 systemd[1]: Started sshd@2-10.0.0.60:22-10.0.0.1:55480.service - OpenSSH per-connection server daemon (10.0.0.1:55480). Sep 10 04:52:01.364540 systemd-logind[1476]: Removed session 2. Sep 10 04:52:01.412630 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 55480 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:52:01.413602 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:52:01.419155 systemd-logind[1476]: New session 3 of user core. Sep 10 04:52:01.428156 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 04:52:01.478532 sshd[1649]: Connection closed by 10.0.0.1 port 55480 Sep 10 04:52:01.479017 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Sep 10 04:52:01.498728 systemd[1]: sshd@2-10.0.0.60:22-10.0.0.1:55480.service: Deactivated successfully. Sep 10 04:52:01.500686 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 04:52:01.501410 systemd-logind[1476]: Session 3 logged out. Waiting for processes to exit. Sep 10 04:52:01.503573 systemd[1]: Started sshd@3-10.0.0.60:22-10.0.0.1:55488.service - OpenSSH per-connection server daemon (10.0.0.1:55488). Sep 10 04:52:01.504164 systemd-logind[1476]: Removed session 3. 
Sep 10 04:52:01.564656 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 55488 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:52:01.565670 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:52:01.569150 systemd-logind[1476]: New session 4 of user core. Sep 10 04:52:01.581152 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 04:52:01.630751 sshd[1658]: Connection closed by 10.0.0.1 port 55488 Sep 10 04:52:01.631145 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Sep 10 04:52:01.640697 systemd[1]: sshd@3-10.0.0.60:22-10.0.0.1:55488.service: Deactivated successfully. Sep 10 04:52:01.643048 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 04:52:01.643604 systemd-logind[1476]: Session 4 logged out. Waiting for processes to exit. Sep 10 04:52:01.645327 systemd[1]: Started sshd@4-10.0.0.60:22-10.0.0.1:55490.service - OpenSSH per-connection server daemon (10.0.0.1:55490). Sep 10 04:52:01.646007 systemd-logind[1476]: Removed session 4. Sep 10 04:52:01.692619 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 55490 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:52:01.693846 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:52:01.697353 systemd-logind[1476]: New session 5 of user core. Sep 10 04:52:01.708072 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 10 04:52:01.761787 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 04:52:01.762087 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 04:52:01.777827 sudo[1668]: pam_unix(sudo:session): session closed for user root Sep 10 04:52:01.779278 sshd[1667]: Connection closed by 10.0.0.1 port 55490 Sep 10 04:52:01.779641 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Sep 10 04:52:01.790780 systemd[1]: sshd@4-10.0.0.60:22-10.0.0.1:55490.service: Deactivated successfully. Sep 10 04:52:01.793061 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 04:52:01.793622 systemd-logind[1476]: Session 5 logged out. Waiting for processes to exit. Sep 10 04:52:01.795519 systemd[1]: Started sshd@5-10.0.0.60:22-10.0.0.1:55492.service - OpenSSH per-connection server daemon (10.0.0.1:55492). Sep 10 04:52:01.796270 systemd-logind[1476]: Removed session 5. Sep 10 04:52:01.864011 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 55492 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:52:01.865081 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:52:01.868699 systemd-logind[1476]: New session 6 of user core. Sep 10 04:52:01.882130 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 10 04:52:01.932117 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 04:52:01.932375 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 04:52:02.015297 sudo[1679]: pam_unix(sudo:session): session closed for user root Sep 10 04:52:02.020373 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 10 04:52:02.020615 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 04:52:02.028980 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 04:52:02.067471 augenrules[1701]: No rules Sep 10 04:52:02.068480 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 04:52:02.068665 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 04:52:02.069967 sudo[1678]: pam_unix(sudo:session): session closed for user root Sep 10 04:52:02.071006 sshd[1677]: Connection closed by 10.0.0.1 port 55492 Sep 10 04:52:02.071293 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Sep 10 04:52:02.081575 systemd[1]: sshd@5-10.0.0.60:22-10.0.0.1:55492.service: Deactivated successfully. Sep 10 04:52:02.084058 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 04:52:02.084745 systemd-logind[1476]: Session 6 logged out. Waiting for processes to exit. Sep 10 04:52:02.086671 systemd[1]: Started sshd@6-10.0.0.60:22-10.0.0.1:55500.service - OpenSSH per-connection server daemon (10.0.0.1:55500). Sep 10 04:52:02.087306 systemd-logind[1476]: Removed session 6. Sep 10 04:52:02.128106 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 55500 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:52:02.129153 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:52:02.133445 systemd-logind[1476]: New session 7 of user core. 
Sep 10 04:52:02.149085 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 10 04:52:02.198573 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 04:52:02.198832 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 04:52:02.462542 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 10 04:52:02.486270 (dockerd)[1734]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 04:52:02.683391 dockerd[1734]: time="2025-09-10T04:52:02.683328380Z" level=info msg="Starting up" Sep 10 04:52:02.684154 dockerd[1734]: time="2025-09-10T04:52:02.684131559Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 10 04:52:02.695023 dockerd[1734]: time="2025-09-10T04:52:02.694993787Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 10 04:52:02.729299 dockerd[1734]: time="2025-09-10T04:52:02.729211667Z" level=info msg="Loading containers: start." Sep 10 04:52:02.739065 kernel: Initializing XFRM netlink socket Sep 10 04:52:02.912485 systemd-networkd[1432]: docker0: Link UP Sep 10 04:52:02.915278 dockerd[1734]: time="2025-09-10T04:52:02.915226544Z" level=info msg="Loading containers: done." 
Sep 10 04:52:02.928018 dockerd[1734]: time="2025-09-10T04:52:02.927971204Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 04:52:02.928128 dockerd[1734]: time="2025-09-10T04:52:02.928044757Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 10 04:52:02.928128 dockerd[1734]: time="2025-09-10T04:52:02.928113374Z" level=info msg="Initializing buildkit" Sep 10 04:52:02.946323 dockerd[1734]: time="2025-09-10T04:52:02.946297313Z" level=info msg="Completed buildkit initialization" Sep 10 04:52:02.952698 dockerd[1734]: time="2025-09-10T04:52:02.952554660Z" level=info msg="Daemon has completed initialization" Sep 10 04:52:02.952768 dockerd[1734]: time="2025-09-10T04:52:02.952710191Z" level=info msg="API listen on /run/docker.sock" Sep 10 04:52:02.952780 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 04:52:03.469381 containerd[1503]: time="2025-09-10T04:52:03.469039843Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 10 04:52:04.183774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2152499952.mount: Deactivated successfully. 
Sep 10 04:52:05.462252 containerd[1503]: time="2025-09-10T04:52:05.462207549Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:05.463231 containerd[1503]: time="2025-09-10T04:52:05.462989647Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Sep 10 04:52:05.464274 containerd[1503]: time="2025-09-10T04:52:05.464245591Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:05.466954 containerd[1503]: time="2025-09-10T04:52:05.466914771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:05.467947 containerd[1503]: time="2025-09-10T04:52:05.467907797Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.998829974s" Sep 10 04:52:05.468052 containerd[1503]: time="2025-09-10T04:52:05.468035507Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 10 04:52:05.469340 containerd[1503]: time="2025-09-10T04:52:05.469312816Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 10 04:52:06.860838 containerd[1503]: time="2025-09-10T04:52:06.860788057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:06.861747 containerd[1503]: time="2025-09-10T04:52:06.861718544Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Sep 10 04:52:06.862792 containerd[1503]: time="2025-09-10T04:52:06.862413955Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:06.864773 containerd[1503]: time="2025-09-10T04:52:06.864735223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:06.866038 containerd[1503]: time="2025-09-10T04:52:06.865999009Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.39665104s" Sep 10 04:52:06.866140 containerd[1503]: time="2025-09-10T04:52:06.866124762Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 10 04:52:06.866850 containerd[1503]: time="2025-09-10T04:52:06.866790117Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 10 04:52:07.270519 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 04:52:07.272264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 04:52:07.402236 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 10 04:52:07.405416 (kubelet)[2023]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 04:52:07.437337 kubelet[2023]: E0910 04:52:07.437294 2023 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 04:52:07.440425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 04:52:07.440550 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 04:52:07.440811 systemd[1]: kubelet.service: Consumed 135ms CPU time, 107.9M memory peak. Sep 10 04:52:08.212174 containerd[1503]: time="2025-09-10T04:52:08.212129012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:08.213018 containerd[1503]: time="2025-09-10T04:52:08.212993481Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Sep 10 04:52:08.213888 containerd[1503]: time="2025-09-10T04:52:08.213851902Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:08.215765 containerd[1503]: time="2025-09-10T04:52:08.215739625Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:08.217467 containerd[1503]: time="2025-09-10T04:52:08.217432872Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id 
\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.350572509s" Sep 10 04:52:08.217530 containerd[1503]: time="2025-09-10T04:52:08.217479178Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 10 04:52:08.218026 containerd[1503]: time="2025-09-10T04:52:08.217975043Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 10 04:52:09.176005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1692233490.mount: Deactivated successfully. Sep 10 04:52:09.424992 containerd[1503]: time="2025-09-10T04:52:09.424948877Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:09.426026 containerd[1503]: time="2025-09-10T04:52:09.425993977Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Sep 10 04:52:09.427070 containerd[1503]: time="2025-09-10T04:52:09.426964705Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:09.428692 containerd[1503]: time="2025-09-10T04:52:09.428662097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:09.429864 containerd[1503]: time="2025-09-10T04:52:09.429833274Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.21181413s" Sep 10 04:52:09.429864 containerd[1503]: time="2025-09-10T04:52:09.429862230Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 10 04:52:09.430306 containerd[1503]: time="2025-09-10T04:52:09.430281271Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 10 04:52:10.122545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount180978224.mount: Deactivated successfully. Sep 10 04:52:11.036846 containerd[1503]: time="2025-09-10T04:52:11.036800004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:11.037373 containerd[1503]: time="2025-09-10T04:52:11.037343602Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 10 04:52:11.038395 containerd[1503]: time="2025-09-10T04:52:11.038349761Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:11.040923 containerd[1503]: time="2025-09-10T04:52:11.040888019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:11.042058 containerd[1503]: time="2025-09-10T04:52:11.042032269Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.611718159s" Sep 10 04:52:11.042107 containerd[1503]: time="2025-09-10T04:52:11.042058935Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 10 04:52:11.042541 containerd[1503]: time="2025-09-10T04:52:11.042480456Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 04:52:11.474575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount394883045.mount: Deactivated successfully. Sep 10 04:52:11.480091 containerd[1503]: time="2025-09-10T04:52:11.480032964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 04:52:11.480707 containerd[1503]: time="2025-09-10T04:52:11.480679740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 10 04:52:11.481628 containerd[1503]: time="2025-09-10T04:52:11.481587485Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 04:52:11.484419 containerd[1503]: time="2025-09-10T04:52:11.483888758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 04:52:11.484419 containerd[1503]: time="2025-09-10T04:52:11.484311520Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 441.673714ms" Sep 10 04:52:11.484419 containerd[1503]: time="2025-09-10T04:52:11.484335383Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 10 04:52:11.485036 containerd[1503]: time="2025-09-10T04:52:11.484988045Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 10 04:52:11.933780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount435057987.mount: Deactivated successfully. Sep 10 04:52:13.897073 containerd[1503]: time="2025-09-10T04:52:13.897026077Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:13.898349 containerd[1503]: time="2025-09-10T04:52:13.898318060Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Sep 10 04:52:13.899279 containerd[1503]: time="2025-09-10T04:52:13.899254943Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:13.901953 containerd[1503]: time="2025-09-10T04:52:13.901889266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:13.903056 containerd[1503]: time="2025-09-10T04:52:13.903024614Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size 
\"70026017\" in 2.417997453s" Sep 10 04:52:13.903118 containerd[1503]: time="2025-09-10T04:52:13.903057518Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 10 04:52:17.520389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 04:52:17.522375 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 04:52:17.681463 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 04:52:17.690256 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 04:52:17.725383 kubelet[2187]: E0910 04:52:17.725330 2187 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 04:52:17.727886 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 04:52:17.728124 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 04:52:17.730025 systemd[1]: kubelet.service: Consumed 132ms CPU time, 106.5M memory peak. Sep 10 04:52:18.109853 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 04:52:18.110006 systemd[1]: kubelet.service: Consumed 132ms CPU time, 106.5M memory peak. Sep 10 04:52:18.111738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 04:52:18.130076 systemd[1]: Reload requested from client PID 2203 ('systemctl') (unit session-7.scope)... Sep 10 04:52:18.130090 systemd[1]: Reloading... Sep 10 04:52:18.201968 zram_generator::config[2246]: No configuration found. 
Sep 10 04:52:18.430992 systemd[1]: Reloading finished in 300 ms. Sep 10 04:52:18.482581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 04:52:18.484445 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 04:52:18.486219 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 04:52:18.486467 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 04:52:18.486503 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95.2M memory peak. Sep 10 04:52:18.487746 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 04:52:18.617838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 04:52:18.621356 (kubelet)[2293]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 04:52:18.651572 kubelet[2293]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 04:52:18.651572 kubelet[2293]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 04:52:18.651572 kubelet[2293]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 04:52:18.651887 kubelet[2293]: I0910 04:52:18.651619 2293 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 04:52:19.521682 kubelet[2293]: I0910 04:52:19.521634 2293 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 10 04:52:19.521682 kubelet[2293]: I0910 04:52:19.521667 2293 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 04:52:19.521937 kubelet[2293]: I0910 04:52:19.521900 2293 server.go:956] "Client rotation is on, will bootstrap in background" Sep 10 04:52:19.542400 kubelet[2293]: E0910 04:52:19.542297 2293 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.60:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 10 04:52:19.543292 kubelet[2293]: I0910 04:52:19.543258 2293 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 04:52:19.550231 kubelet[2293]: I0910 04:52:19.550193 2293 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 04:52:19.553290 kubelet[2293]: I0910 04:52:19.553262 2293 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 04:52:19.553582 kubelet[2293]: I0910 04:52:19.553558 2293 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 04:52:19.553748 kubelet[2293]: I0910 04:52:19.553585 2293 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 04:52:19.553839 kubelet[2293]: I0910 04:52:19.553816 2293 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 04:52:19.553839 
kubelet[2293]: I0910 04:52:19.553824 2293 container_manager_linux.go:303] "Creating device plugin manager" Sep 10 04:52:19.554612 kubelet[2293]: I0910 04:52:19.554581 2293 state_mem.go:36] "Initialized new in-memory state store" Sep 10 04:52:19.556992 kubelet[2293]: I0910 04:52:19.556971 2293 kubelet.go:480] "Attempting to sync node with API server" Sep 10 04:52:19.557030 kubelet[2293]: I0910 04:52:19.556996 2293 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 04:52:19.557030 kubelet[2293]: I0910 04:52:19.557023 2293 kubelet.go:386] "Adding apiserver pod source" Sep 10 04:52:19.558029 kubelet[2293]: I0910 04:52:19.558006 2293 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 04:52:19.558911 kubelet[2293]: I0910 04:52:19.558867 2293 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 10 04:52:19.559591 kubelet[2293]: I0910 04:52:19.559556 2293 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 10 04:52:19.559730 kubelet[2293]: W0910 04:52:19.559704 2293 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Sep 10 04:52:19.560922 kubelet[2293]: E0910 04:52:19.560883 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.60:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 10 04:52:19.561464 kubelet[2293]: E0910 04:52:19.561422 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.60:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 10 04:52:19.562389 kubelet[2293]: I0910 04:52:19.562360 2293 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 04:52:19.562446 kubelet[2293]: I0910 04:52:19.562410 2293 server.go:1289] "Started kubelet" Sep 10 04:52:19.562549 kubelet[2293]: I0910 04:52:19.562516 2293 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 04:52:19.563520 kubelet[2293]: I0910 04:52:19.563500 2293 server.go:317] "Adding debug handlers to kubelet server" Sep 10 04:52:19.566962 kubelet[2293]: I0910 04:52:19.566680 2293 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 04:52:19.567106 kubelet[2293]: I0910 04:52:19.567091 2293 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 04:52:19.567497 kubelet[2293]: E0910 04:52:19.566258 2293 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.60:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.60:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863d2b1897d9362 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 04:52:19.562378082 +0000 UTC m=+0.937946967,LastTimestamp:2025-09-10 04:52:19.562378082 +0000 UTC m=+0.937946967,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 04:52:19.567627 kubelet[2293]: I0910 04:52:19.567596 2293 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 04:52:19.567944 kubelet[2293]: I0910 04:52:19.567904 2293 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 04:52:19.569263 kubelet[2293]: I0910 04:52:19.569248 2293 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 04:52:19.569466 kubelet[2293]: I0910 04:52:19.569450 2293 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 04:52:19.569561 kubelet[2293]: I0910 04:52:19.569552 2293 reconciler.go:26] "Reconciler: start to sync state" Sep 10 04:52:19.569985 kubelet[2293]: E0910 04:52:19.569960 2293 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 04:52:19.570170 kubelet[2293]: E0910 04:52:19.570140 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.60:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 10 04:52:19.570614 kubelet[2293]: I0910 04:52:19.570579 2293 factory.go:223] Registration of the systemd container factory successfully Sep 10 04:52:19.570719 kubelet[2293]: I0910 04:52:19.570696 
2293 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 04:52:19.571254 kubelet[2293]: E0910 04:52:19.571209 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="200ms" Sep 10 04:52:19.572031 kubelet[2293]: E0910 04:52:19.572013 2293 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 04:52:19.572417 kubelet[2293]: I0910 04:52:19.572372 2293 factory.go:223] Registration of the containerd container factory successfully Sep 10 04:52:19.583131 kubelet[2293]: I0910 04:52:19.583110 2293 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 04:52:19.583429 kubelet[2293]: I0910 04:52:19.583211 2293 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 04:52:19.583429 kubelet[2293]: I0910 04:52:19.583231 2293 state_mem.go:36] "Initialized new in-memory state store" Sep 10 04:52:19.583867 kubelet[2293]: I0910 04:52:19.583826 2293 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 10 04:52:19.584862 kubelet[2293]: I0910 04:52:19.584834 2293 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 10 04:52:19.584862 kubelet[2293]: I0910 04:52:19.584857 2293 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 04:52:19.584921 kubelet[2293]: I0910 04:52:19.584884 2293 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 10 04:52:19.584921 kubelet[2293]: I0910 04:52:19.584891 2293 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 04:52:19.585034 kubelet[2293]: E0910 04:52:19.584939 2293 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 04:52:19.587095 kubelet[2293]: I0910 04:52:19.587050 2293 policy_none.go:49] "None policy: Start" Sep 10 04:52:19.587095 kubelet[2293]: I0910 04:52:19.587073 2293 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 04:52:19.587188 kubelet[2293]: I0910 04:52:19.587103 2293 state_mem.go:35] "Initializing new in-memory state store" Sep 10 04:52:19.591142 kubelet[2293]: E0910 04:52:19.591104 2293 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.60:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.60:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 10 04:52:19.593474 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 10 04:52:19.606529 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 04:52:19.609170 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 10 04:52:19.617798 kubelet[2293]: E0910 04:52:19.617649 2293 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 04:52:19.617879 kubelet[2293]: I0910 04:52:19.617857 2293 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 04:52:19.617923 kubelet[2293]: I0910 04:52:19.617869 2293 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 04:52:19.618124 kubelet[2293]: I0910 04:52:19.618104 2293 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 04:52:19.619490 kubelet[2293]: E0910 04:52:19.619471 2293 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 10 04:52:19.619561 kubelet[2293]: E0910 04:52:19.619507 2293 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 04:52:19.694494 systemd[1]: Created slice kubepods-burstable-podd262ecd4290695bb64a5ad64ad13c0dd.slice - libcontainer container kubepods-burstable-podd262ecd4290695bb64a5ad64ad13c0dd.slice. 
Sep 10 04:52:19.719150 kubelet[2293]: I0910 04:52:19.719112 2293 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 04:52:19.719873 kubelet[2293]: E0910 04:52:19.719842 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Sep 10 04:52:19.725103 kubelet[2293]: E0910 04:52:19.725075 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 04:52:19.727429 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 10 04:52:19.751953 kubelet[2293]: E0910 04:52:19.751885 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 04:52:19.754086 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
Sep 10 04:52:19.755371 kubelet[2293]: E0910 04:52:19.755352 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 04:52:19.771901 kubelet[2293]: E0910 04:52:19.771810 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="400ms" Sep 10 04:52:19.871264 kubelet[2293]: I0910 04:52:19.871120 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 04:52:19.871264 kubelet[2293]: I0910 04:52:19.871161 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 04:52:19.871264 kubelet[2293]: I0910 04:52:19.871186 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 04:52:19.871264 kubelet[2293]: I0910 04:52:19.871216 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 04:52:19.871264 kubelet[2293]: I0910 04:52:19.871238 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 10 04:52:19.871461 kubelet[2293]: I0910 04:52:19.871276 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d262ecd4290695bb64a5ad64ad13c0dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d262ecd4290695bb64a5ad64ad13c0dd\") " pod="kube-system/kube-apiserver-localhost" Sep 10 04:52:19.871461 kubelet[2293]: I0910 04:52:19.871312 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 04:52:19.871461 kubelet[2293]: I0910 04:52:19.871330 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d262ecd4290695bb64a5ad64ad13c0dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d262ecd4290695bb64a5ad64ad13c0dd\") " pod="kube-system/kube-apiserver-localhost" Sep 10 04:52:19.871461 kubelet[2293]: I0910 04:52:19.871348 2293 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d262ecd4290695bb64a5ad64ad13c0dd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d262ecd4290695bb64a5ad64ad13c0dd\") " pod="kube-system/kube-apiserver-localhost" Sep 10 04:52:19.921444 kubelet[2293]: I0910 04:52:19.921413 2293 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 04:52:19.921799 kubelet[2293]: E0910 04:52:19.921748 2293 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.60:6443/api/v1/nodes\": dial tcp 10.0.0.60:6443: connect: connection refused" node="localhost" Sep 10 04:52:20.027074 containerd[1503]: time="2025-09-10T04:52:20.026986172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d262ecd4290695bb64a5ad64ad13c0dd,Namespace:kube-system,Attempt:0,}" Sep 10 04:52:20.043798 containerd[1503]: time="2025-09-10T04:52:20.043752418Z" level=info msg="connecting to shim d8c3c9e117b49c380ed5b8be9553050e7589eaed2e2a07a6f63b4ca06136c1aa" address="unix:///run/containerd/s/84881b32ace8bd97429b679b5fd5f722acb9f1bfdad5165059212c5387d9dec1" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:52:20.054135 containerd[1503]: time="2025-09-10T04:52:20.054096743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 10 04:52:20.055916 containerd[1503]: time="2025-09-10T04:52:20.055887217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 10 04:52:20.069083 systemd[1]: Started cri-containerd-d8c3c9e117b49c380ed5b8be9553050e7589eaed2e2a07a6f63b4ca06136c1aa.scope - libcontainer container d8c3c9e117b49c380ed5b8be9553050e7589eaed2e2a07a6f63b4ca06136c1aa. 
Sep 10 04:52:20.082034 containerd[1503]: time="2025-09-10T04:52:20.081991500Z" level=info msg="connecting to shim 595c56150d32c3ae8b34268dfaa879c4aafdde4b2940476452a1d02d9559578e" address="unix:///run/containerd/s/4a5387370a28246782eaeb1b02403f5d6c47748fe6319e3cffa644cca8708fbe" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:52:20.085080 containerd[1503]: time="2025-09-10T04:52:20.085009165Z" level=info msg="connecting to shim b7d99ad9efd55c896a6cd35a71088a31f8c8a5b5a6162fc3588445bbfc14b74f" address="unix:///run/containerd/s/645116d2b6a66d08447d63c4e97a5c2a1310897c41363a482708495457cf4ccc" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:52:20.110103 systemd[1]: Started cri-containerd-b7d99ad9efd55c896a6cd35a71088a31f8c8a5b5a6162fc3588445bbfc14b74f.scope - libcontainer container b7d99ad9efd55c896a6cd35a71088a31f8c8a5b5a6162fc3588445bbfc14b74f. Sep 10 04:52:20.112775 containerd[1503]: time="2025-09-10T04:52:20.112736793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d262ecd4290695bb64a5ad64ad13c0dd,Namespace:kube-system,Attempt:0,} returns sandbox id \"d8c3c9e117b49c380ed5b8be9553050e7589eaed2e2a07a6f63b4ca06136c1aa\"" Sep 10 04:52:20.113216 systemd[1]: Started cri-containerd-595c56150d32c3ae8b34268dfaa879c4aafdde4b2940476452a1d02d9559578e.scope - libcontainer container 595c56150d32c3ae8b34268dfaa879c4aafdde4b2940476452a1d02d9559578e. 
Sep 10 04:52:20.121759 containerd[1503]: time="2025-09-10T04:52:20.121721329Z" level=info msg="CreateContainer within sandbox \"d8c3c9e117b49c380ed5b8be9553050e7589eaed2e2a07a6f63b4ca06136c1aa\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 04:52:20.130274 containerd[1503]: time="2025-09-10T04:52:20.130226927Z" level=info msg="Container aa64ac2b5294c1d33e50a35fb4852f6db9aecd563c448d51863d87736b3f7c7a: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:52:20.139126 containerd[1503]: time="2025-09-10T04:52:20.139021328Z" level=info msg="CreateContainer within sandbox \"d8c3c9e117b49c380ed5b8be9553050e7589eaed2e2a07a6f63b4ca06136c1aa\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aa64ac2b5294c1d33e50a35fb4852f6db9aecd563c448d51863d87736b3f7c7a\"" Sep 10 04:52:20.139746 containerd[1503]: time="2025-09-10T04:52:20.139718128Z" level=info msg="StartContainer for \"aa64ac2b5294c1d33e50a35fb4852f6db9aecd563c448d51863d87736b3f7c7a\"" Sep 10 04:52:20.142159 containerd[1503]: time="2025-09-10T04:52:20.142128379Z" level=info msg="connecting to shim aa64ac2b5294c1d33e50a35fb4852f6db9aecd563c448d51863d87736b3f7c7a" address="unix:///run/containerd/s/84881b32ace8bd97429b679b5fd5f722acb9f1bfdad5165059212c5387d9dec1" protocol=ttrpc version=3 Sep 10 04:52:20.155880 containerd[1503]: time="2025-09-10T04:52:20.155844791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7d99ad9efd55c896a6cd35a71088a31f8c8a5b5a6162fc3588445bbfc14b74f\"" Sep 10 04:52:20.161450 containerd[1503]: time="2025-09-10T04:52:20.161416908Z" level=info msg="CreateContainer within sandbox \"b7d99ad9efd55c896a6cd35a71088a31f8c8a5b5a6162fc3588445bbfc14b74f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 04:52:20.164507 containerd[1503]: time="2025-09-10T04:52:20.164382478Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"595c56150d32c3ae8b34268dfaa879c4aafdde4b2940476452a1d02d9559578e\"" Sep 10 04:52:20.167096 systemd[1]: Started cri-containerd-aa64ac2b5294c1d33e50a35fb4852f6db9aecd563c448d51863d87736b3f7c7a.scope - libcontainer container aa64ac2b5294c1d33e50a35fb4852f6db9aecd563c448d51863d87736b3f7c7a. Sep 10 04:52:20.168169 containerd[1503]: time="2025-09-10T04:52:20.168127232Z" level=info msg="CreateContainer within sandbox \"595c56150d32c3ae8b34268dfaa879c4aafdde4b2940476452a1d02d9559578e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 04:52:20.172773 containerd[1503]: time="2025-09-10T04:52:20.172737273Z" level=info msg="Container 8025b6674e8f6d3bb82b568a101406448dc5b6e26402998ce1b85e60f6b4b506: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:52:20.172857 kubelet[2293]: E0910 04:52:20.172802 2293 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.60:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.60:6443: connect: connection refused" interval="800ms" Sep 10 04:52:20.176418 containerd[1503]: time="2025-09-10T04:52:20.176386599Z" level=info msg="Container d65cbccd86c2246320ef68015c0c1fc4511b0e38810e650b7c84d27585f736e7: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:52:20.179519 containerd[1503]: time="2025-09-10T04:52:20.179478246Z" level=info msg="CreateContainer within sandbox \"b7d99ad9efd55c896a6cd35a71088a31f8c8a5b5a6162fc3588445bbfc14b74f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8025b6674e8f6d3bb82b568a101406448dc5b6e26402998ce1b85e60f6b4b506\"" Sep 10 04:52:20.180027 containerd[1503]: time="2025-09-10T04:52:20.179966345Z" level=info msg="StartContainer for \"8025b6674e8f6d3bb82b568a101406448dc5b6e26402998ce1b85e60f6b4b506\"" Sep 10 
04:52:20.181503 containerd[1503]: time="2025-09-10T04:52:20.181466255Z" level=info msg="connecting to shim 8025b6674e8f6d3bb82b568a101406448dc5b6e26402998ce1b85e60f6b4b506" address="unix:///run/containerd/s/645116d2b6a66d08447d63c4e97a5c2a1310897c41363a482708495457cf4ccc" protocol=ttrpc version=3 Sep 10 04:52:20.182868 containerd[1503]: time="2025-09-10T04:52:20.182812521Z" level=info msg="CreateContainer within sandbox \"595c56150d32c3ae8b34268dfaa879c4aafdde4b2940476452a1d02d9559578e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d65cbccd86c2246320ef68015c0c1fc4511b0e38810e650b7c84d27585f736e7\"" Sep 10 04:52:20.183331 containerd[1503]: time="2025-09-10T04:52:20.183308784Z" level=info msg="StartContainer for \"d65cbccd86c2246320ef68015c0c1fc4511b0e38810e650b7c84d27585f736e7\"" Sep 10 04:52:20.184579 containerd[1503]: time="2025-09-10T04:52:20.184544498Z" level=info msg="connecting to shim d65cbccd86c2246320ef68015c0c1fc4511b0e38810e650b7c84d27585f736e7" address="unix:///run/containerd/s/4a5387370a28246782eaeb1b02403f5d6c47748fe6319e3cffa644cca8708fbe" protocol=ttrpc version=3 Sep 10 04:52:20.201136 systemd[1]: Started cri-containerd-8025b6674e8f6d3bb82b568a101406448dc5b6e26402998ce1b85e60f6b4b506.scope - libcontainer container 8025b6674e8f6d3bb82b568a101406448dc5b6e26402998ce1b85e60f6b4b506. Sep 10 04:52:20.210941 containerd[1503]: time="2025-09-10T04:52:20.210891130Z" level=info msg="StartContainer for \"aa64ac2b5294c1d33e50a35fb4852f6db9aecd563c448d51863d87736b3f7c7a\" returns successfully" Sep 10 04:52:20.215096 systemd[1]: Started cri-containerd-d65cbccd86c2246320ef68015c0c1fc4511b0e38810e650b7c84d27585f736e7.scope - libcontainer container d65cbccd86c2246320ef68015c0c1fc4511b0e38810e650b7c84d27585f736e7. 
Sep 10 04:52:20.255038 containerd[1503]: time="2025-09-10T04:52:20.254856854Z" level=info msg="StartContainer for \"8025b6674e8f6d3bb82b568a101406448dc5b6e26402998ce1b85e60f6b4b506\" returns successfully" Sep 10 04:52:20.269488 containerd[1503]: time="2025-09-10T04:52:20.269432152Z" level=info msg="StartContainer for \"d65cbccd86c2246320ef68015c0c1fc4511b0e38810e650b7c84d27585f736e7\" returns successfully" Sep 10 04:52:20.324576 kubelet[2293]: I0910 04:52:20.324461 2293 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 04:52:20.596817 kubelet[2293]: E0910 04:52:20.596590 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 04:52:20.600381 kubelet[2293]: E0910 04:52:20.600201 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 04:52:20.600381 kubelet[2293]: E0910 04:52:20.600259 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 04:52:21.603397 kubelet[2293]: E0910 04:52:21.603361 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 04:52:21.603715 kubelet[2293]: E0910 04:52:21.603428 2293 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 04:52:21.901845 kubelet[2293]: E0910 04:52:21.901710 2293 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 04:52:21.957341 kubelet[2293]: I0910 04:52:21.957301 2293 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 04:52:21.957341 
kubelet[2293]: E0910 04:52:21.957339 2293 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 10 04:52:21.970881 kubelet[2293]: I0910 04:52:21.970848 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 04:52:21.980152 kubelet[2293]: E0910 04:52:21.980074 2293 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1863d2b1897d9362 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 04:52:19.562378082 +0000 UTC m=+0.937946967,LastTimestamp:2025-09-10 04:52:19.562378082 +0000 UTC m=+0.937946967,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 04:52:22.028295 kubelet[2293]: E0910 04:52:22.028256 2293 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 10 04:52:22.028295 kubelet[2293]: I0910 04:52:22.028289 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 04:52:22.032670 kubelet[2293]: E0910 04:52:22.030355 2293 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 10 04:52:22.032670 kubelet[2293]: I0910 04:52:22.030389 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" 
Sep 10 04:52:22.032670 kubelet[2293]: E0910 04:52:22.032404 2293 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 10 04:52:22.378723 kubelet[2293]: I0910 04:52:22.378687 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 04:52:22.380951 kubelet[2293]: E0910 04:52:22.380791 2293 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 10 04:52:22.559355 kubelet[2293]: I0910 04:52:22.559320 2293 apiserver.go:52] "Watching apiserver" Sep 10 04:52:22.570156 kubelet[2293]: I0910 04:52:22.570130 2293 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 04:52:24.061121 systemd[1]: Reload requested from client PID 2578 ('systemctl') (unit session-7.scope)... Sep 10 04:52:24.061137 systemd[1]: Reloading... Sep 10 04:52:24.063607 kubelet[2293]: I0910 04:52:24.063561 2293 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 04:52:24.132957 zram_generator::config[2627]: No configuration found. Sep 10 04:52:24.294481 systemd[1]: Reloading finished in 233 ms. Sep 10 04:52:24.331020 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 04:52:24.346355 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 04:52:24.346587 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 04:52:24.346638 systemd[1]: kubelet.service: Consumed 1.307s CPU time, 127.2M memory peak. Sep 10 04:52:24.349008 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 10 04:52:24.472296 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 04:52:24.476446 (kubelet)[2663]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 04:52:24.513060 kubelet[2663]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 04:52:24.513060 kubelet[2663]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 04:52:24.513060 kubelet[2663]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 04:52:24.514144 kubelet[2663]: I0910 04:52:24.513156 2663 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 04:52:24.519048 kubelet[2663]: I0910 04:52:24.519017 2663 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 10 04:52:24.519048 kubelet[2663]: I0910 04:52:24.519044 2663 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 04:52:24.519275 kubelet[2663]: I0910 04:52:24.519256 2663 server.go:956] "Client rotation is on, will bootstrap in background" Sep 10 04:52:24.520481 kubelet[2663]: I0910 04:52:24.520457 2663 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 10 04:52:24.523681 kubelet[2663]: I0910 04:52:24.523642 2663 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 04:52:24.527348 kubelet[2663]: I0910 
04:52:24.527322 2663 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 04:52:24.530032 kubelet[2663]: I0910 04:52:24.529990 2663 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 04:52:24.530240 kubelet[2663]: I0910 04:52:24.530207 2663 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 04:52:24.530371 kubelet[2663]: I0910 04:52:24.530232 2663 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuot
aPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 04:52:24.530444 kubelet[2663]: I0910 04:52:24.530384 2663 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 04:52:24.530444 kubelet[2663]: I0910 04:52:24.530391 2663 container_manager_linux.go:303] "Creating device plugin manager"
Sep 10 04:52:24.530444 kubelet[2663]: I0910 04:52:24.530430 2663 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 04:52:24.530592 kubelet[2663]: I0910 04:52:24.530577 2663 kubelet.go:480] "Attempting to sync node with API server"
Sep 10 04:52:24.530613 kubelet[2663]: I0910 04:52:24.530597 2663 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 04:52:24.530984 kubelet[2663]: I0910 04:52:24.530963 2663 kubelet.go:386] "Adding apiserver pod source"
Sep 10 04:52:24.531023 kubelet[2663]: I0910 04:52:24.531005 2663 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 04:52:24.532249 kubelet[2663]: I0910 04:52:24.532226 2663 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 10 04:52:24.532851 kubelet[2663]: I0910 04:52:24.532819 2663 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 10 04:52:24.536941 kubelet[2663]: I0910 04:52:24.536302 2663 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 10 04:52:24.536941 kubelet[2663]: I0910 04:52:24.536344 2663 server.go:1289] "Started kubelet"
Sep 10 04:52:24.542032 kubelet[2663]: I0910 04:52:24.541598 2663 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 04:52:24.544981 kubelet[2663]: I0910 04:52:24.544957 2663 server.go:317] "Adding debug handlers to kubelet server"
Sep 10 04:52:24.545361 kubelet[2663]: I0910 04:52:24.545295 2663 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 04:52:24.545749 kubelet[2663]: I0910 04:52:24.545722 2663 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 04:52:24.546105 kubelet[2663]: I0910 04:52:24.541285 2663 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 04:52:24.546366 kubelet[2663]: I0910 04:52:24.546334 2663 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 04:52:24.546414 kubelet[2663]: I0910 04:52:24.546410 2663 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 10 04:52:24.546525 kubelet[2663]: E0910 04:52:24.546502 2663 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 04:52:24.546720 kubelet[2663]: I0910 04:52:24.546699 2663 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 10 04:52:24.546839 kubelet[2663]: I0910 04:52:24.546824 2663 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 04:52:24.554115 kubelet[2663]: I0910 04:52:24.554083 2663 factory.go:223] Registration of the systemd container factory successfully
Sep 10 04:52:24.554623 kubelet[2663]: I0910 04:52:24.554581 2663 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 04:52:24.556126 kubelet[2663]: E0910 04:52:24.556093 2663 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 04:52:24.557192 kubelet[2663]: I0910 04:52:24.557172 2663 factory.go:223] Registration of the containerd container factory successfully
Sep 10 04:52:24.560296 kubelet[2663]: I0910 04:52:24.560254 2663 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 10 04:52:24.568017 kubelet[2663]: I0910 04:52:24.567980 2663 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 10 04:52:24.568017 kubelet[2663]: I0910 04:52:24.568005 2663 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 10 04:52:24.568149 kubelet[2663]: I0910 04:52:24.568034 2663 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 10 04:52:24.568149 kubelet[2663]: I0910 04:52:24.568040 2663 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 10 04:52:24.568149 kubelet[2663]: E0910 04:52:24.568078 2663 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 10 04:52:24.592190 kubelet[2663]: I0910 04:52:24.592101 2663 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 10 04:52:24.592190 kubelet[2663]: I0910 04:52:24.592122 2663 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 10 04:52:24.592190 kubelet[2663]: I0910 04:52:24.592143 2663 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 04:52:24.593119 kubelet[2663]: I0910 04:52:24.593086 2663 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 10 04:52:24.593119 kubelet[2663]: I0910 04:52:24.593111 2663 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 10 04:52:24.593394 kubelet[2663]: I0910 04:52:24.593130 2663 policy_none.go:49] "None policy: Start"
Sep 10 04:52:24.593394 kubelet[2663]: I0910 04:52:24.593140 2663 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 10 04:52:24.593394 kubelet[2663]: I0910 04:52:24.593149 2663 state_mem.go:35] "Initializing new in-memory state store"
Sep 10 04:52:24.593394 kubelet[2663]: I0910 04:52:24.593237 2663 state_mem.go:75] "Updated machine memory state"
Sep 10 04:52:24.597085 kubelet[2663]: E0910 04:52:24.597060 2663 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 10 04:52:24.597334 kubelet[2663]: I0910 04:52:24.597317 2663 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 10 04:52:24.597437 kubelet[2663]: I0910 04:52:24.597407 2663 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 10 04:52:24.597718 kubelet[2663]: I0910 04:52:24.597658 2663 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 10 04:52:24.599189 kubelet[2663]: E0910 04:52:24.599165 2663 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 10 04:52:24.669363 kubelet[2663]: I0910 04:52:24.669328 2663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 04:52:24.669502 kubelet[2663]: I0910 04:52:24.669424 2663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 10 04:52:24.670294 kubelet[2663]: I0910 04:52:24.670272 2663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 10 04:52:24.676095 kubelet[2663]: E0910 04:52:24.676058 2663 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 10 04:52:24.700487 kubelet[2663]: I0910 04:52:24.700457 2663 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 10 04:52:24.707372 kubelet[2663]: I0910 04:52:24.707347 2663 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 10 04:52:24.707491 kubelet[2663]: I0910 04:52:24.707417 2663 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 10 04:52:24.748540 kubelet[2663]: I0910 04:52:24.748498 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d262ecd4290695bb64a5ad64ad13c0dd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d262ecd4290695bb64a5ad64ad13c0dd\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 04:52:24.748540 kubelet[2663]: I0910 04:52:24.748536 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d262ecd4290695bb64a5ad64ad13c0dd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d262ecd4290695bb64a5ad64ad13c0dd\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 04:52:24.748690 kubelet[2663]: I0910 04:52:24.748557 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 04:52:24.748690 kubelet[2663]: I0910 04:52:24.748575 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 04:52:24.748690 kubelet[2663]: I0910 04:52:24.748617 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 04:52:24.748690 kubelet[2663]: I0910 04:52:24.748671 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost"
Sep 10 04:52:24.748774 kubelet[2663]: I0910 04:52:24.748697 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d262ecd4290695bb64a5ad64ad13c0dd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d262ecd4290695bb64a5ad64ad13c0dd\") " pod="kube-system/kube-apiserver-localhost"
Sep 10 04:52:24.748774 kubelet[2663]: I0910 04:52:24.748721 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 04:52:24.748774 kubelet[2663]: I0910 04:52:24.748737 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 10 04:52:25.531552 kubelet[2663]: I0910 04:52:25.531515 2663 apiserver.go:52] "Watching apiserver"
Sep 10 04:52:25.547283 kubelet[2663]: I0910 04:52:25.547248 2663 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 10 04:52:25.579810 kubelet[2663]: I0910 04:52:25.579783 2663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 10 04:52:25.579923 kubelet[2663]: I0910 04:52:25.579897 2663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 10 04:52:25.580082 kubelet[2663]: I0910 04:52:25.580049 2663 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 10 04:52:25.586759 kubelet[2663]: E0910 04:52:25.586641 2663 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 10 04:52:25.586869 kubelet[2663]: E0910 04:52:25.586645 2663 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 10 04:52:25.587093 kubelet[2663]: E0910 04:52:25.587075 2663 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 10 04:52:25.611026 kubelet[2663]: I0910 04:52:25.610956 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.6109174400000001 podStartE2EDuration="1.61091744s" podCreationTimestamp="2025-09-10 04:52:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 04:52:25.603605325 +0000 UTC m=+1.123284398" watchObservedRunningTime="2025-09-10 04:52:25.61091744 +0000 UTC m=+1.130596473"
Sep 10 04:52:25.611146 kubelet[2663]: I0910 04:52:25.611073 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.611068782 podStartE2EDuration="1.611068782s" podCreationTimestamp="2025-09-10 04:52:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 04:52:25.610767938 +0000 UTC m=+1.130446971" watchObservedRunningTime="2025-09-10 04:52:25.611068782 +0000 UTC m=+1.130747815"
Sep 10 04:52:25.635157 kubelet[2663]: I0910 04:52:25.634288 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.634271954 podStartE2EDuration="1.634271954s" podCreationTimestamp="2025-09-10 04:52:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 04:52:25.626410118 +0000 UTC m=+1.146089151" watchObservedRunningTime="2025-09-10 04:52:25.634271954 +0000 UTC m=+1.153950987"
Sep 10 04:52:30.538097 kubelet[2663]: I0910 04:52:30.538061 2663 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 10 04:52:30.538526 containerd[1503]: time="2025-09-10T04:52:30.538491211Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 10 04:52:30.538755 kubelet[2663]: I0910 04:52:30.538698 2663 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 10 04:52:31.294816 systemd[1]: Created slice kubepods-besteffort-pod9adc5c91_48af_43b1_b343_3d53d7f432f7.slice - libcontainer container kubepods-besteffort-pod9adc5c91_48af_43b1_b343_3d53d7f432f7.slice.
Sep 10 04:52:31.395643 kubelet[2663]: I0910 04:52:31.395571 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9adc5c91-48af-43b1-b343-3d53d7f432f7-kube-proxy\") pod \"kube-proxy-mt99d\" (UID: \"9adc5c91-48af-43b1-b343-3d53d7f432f7\") " pod="kube-system/kube-proxy-mt99d"
Sep 10 04:52:31.395643 kubelet[2663]: I0910 04:52:31.395617 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9adc5c91-48af-43b1-b343-3d53d7f432f7-lib-modules\") pod \"kube-proxy-mt99d\" (UID: \"9adc5c91-48af-43b1-b343-3d53d7f432f7\") " pod="kube-system/kube-proxy-mt99d"
Sep 10 04:52:31.395643 kubelet[2663]: I0910 04:52:31.395637 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w48w6\" (UniqueName: \"kubernetes.io/projected/9adc5c91-48af-43b1-b343-3d53d7f432f7-kube-api-access-w48w6\") pod \"kube-proxy-mt99d\" (UID: \"9adc5c91-48af-43b1-b343-3d53d7f432f7\") " pod="kube-system/kube-proxy-mt99d"
Sep 10 04:52:31.395643 kubelet[2663]: I0910 04:52:31.395657 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9adc5c91-48af-43b1-b343-3d53d7f432f7-xtables-lock\") pod \"kube-proxy-mt99d\" (UID: \"9adc5c91-48af-43b1-b343-3d53d7f432f7\") " pod="kube-system/kube-proxy-mt99d"
Sep 10 04:52:31.606418 containerd[1503]: time="2025-09-10T04:52:31.605885482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mt99d,Uid:9adc5c91-48af-43b1-b343-3d53d7f432f7,Namespace:kube-system,Attempt:0,}"
Sep 10 04:52:31.617258 systemd[1]: Created slice kubepods-besteffort-podef72fcc3_1772_4b2e_b955_7944f6ad07c7.slice - libcontainer container kubepods-besteffort-podef72fcc3_1772_4b2e_b955_7944f6ad07c7.slice.
Sep 10 04:52:31.625418 containerd[1503]: time="2025-09-10T04:52:31.625379991Z" level=info msg="connecting to shim 91c30656b951ff6a4f95e5e81d4fdd2bda30c6d7098c05c9c9c4bc4d5f0e10b2" address="unix:///run/containerd/s/c48f8b2c2e107b50ef018b79cc54ff1a22f03331db2f150d7b731721b257e143" namespace=k8s.io protocol=ttrpc version=3
Sep 10 04:52:31.653106 systemd[1]: Started cri-containerd-91c30656b951ff6a4f95e5e81d4fdd2bda30c6d7098c05c9c9c4bc4d5f0e10b2.scope - libcontainer container 91c30656b951ff6a4f95e5e81d4fdd2bda30c6d7098c05c9c9c4bc4d5f0e10b2.
Sep 10 04:52:31.686823 containerd[1503]: time="2025-09-10T04:52:31.686777448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mt99d,Uid:9adc5c91-48af-43b1-b343-3d53d7f432f7,Namespace:kube-system,Attempt:0,} returns sandbox id \"91c30656b951ff6a4f95e5e81d4fdd2bda30c6d7098c05c9c9c4bc4d5f0e10b2\""
Sep 10 04:52:31.705429 kubelet[2663]: I0910 04:52:31.705331 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ef72fcc3-1772-4b2e-b955-7944f6ad07c7-var-lib-calico\") pod \"tigera-operator-755d956888-m4s7f\" (UID: \"ef72fcc3-1772-4b2e-b955-7944f6ad07c7\") " pod="tigera-operator/tigera-operator-755d956888-m4s7f"
Sep 10 04:52:31.705429 kubelet[2663]: I0910 04:52:31.705407 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45wp8\" (UniqueName: \"kubernetes.io/projected/ef72fcc3-1772-4b2e-b955-7944f6ad07c7-kube-api-access-45wp8\") pod \"tigera-operator-755d956888-m4s7f\" (UID: \"ef72fcc3-1772-4b2e-b955-7944f6ad07c7\") " pod="tigera-operator/tigera-operator-755d956888-m4s7f"
Sep 10 04:52:31.753273 containerd[1503]: time="2025-09-10T04:52:31.753221864Z" level=info msg="CreateContainer within sandbox \"91c30656b951ff6a4f95e5e81d4fdd2bda30c6d7098c05c9c9c4bc4d5f0e10b2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 10 04:52:31.923818 containerd[1503]: time="2025-09-10T04:52:31.923709976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-m4s7f,Uid:ef72fcc3-1772-4b2e-b955-7944f6ad07c7,Namespace:tigera-operator,Attempt:0,}"
Sep 10 04:52:31.973137 containerd[1503]: time="2025-09-10T04:52:31.973088856Z" level=info msg="Container ea98a46cba7476e18c9da48e8ed18ce1c8f6fe0016e91678f8094b7aef32e006: CDI devices from CRI Config.CDIDevices: []"
Sep 10 04:52:32.000416 containerd[1503]: time="2025-09-10T04:52:32.000369560Z" level=info msg="CreateContainer within sandbox \"91c30656b951ff6a4f95e5e81d4fdd2bda30c6d7098c05c9c9c4bc4d5f0e10b2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ea98a46cba7476e18c9da48e8ed18ce1c8f6fe0016e91678f8094b7aef32e006\""
Sep 10 04:52:32.001178 containerd[1503]: time="2025-09-10T04:52:32.001137895Z" level=info msg="StartContainer for \"ea98a46cba7476e18c9da48e8ed18ce1c8f6fe0016e91678f8094b7aef32e006\""
Sep 10 04:52:32.003804 containerd[1503]: time="2025-09-10T04:52:32.003765235Z" level=info msg="connecting to shim ea98a46cba7476e18c9da48e8ed18ce1c8f6fe0016e91678f8094b7aef32e006" address="unix:///run/containerd/s/c48f8b2c2e107b50ef018b79cc54ff1a22f03331db2f150d7b731721b257e143" protocol=ttrpc version=3
Sep 10 04:52:32.018592 containerd[1503]: time="2025-09-10T04:52:32.018521390Z" level=info msg="connecting to shim 28b8bd94ea82b1009cdb7f4e2408222ca2636fcdeab0ce674ccaad2ac78ef87c" address="unix:///run/containerd/s/4905f95dcfeffbf98dc952eb437d7bde5e381860c3de8be81eff7c32a781f88b" namespace=k8s.io protocol=ttrpc version=3
Sep 10 04:52:32.024108 systemd[1]: Started cri-containerd-ea98a46cba7476e18c9da48e8ed18ce1c8f6fe0016e91678f8094b7aef32e006.scope - libcontainer container ea98a46cba7476e18c9da48e8ed18ce1c8f6fe0016e91678f8094b7aef32e006.
Sep 10 04:52:32.045088 systemd[1]: Started cri-containerd-28b8bd94ea82b1009cdb7f4e2408222ca2636fcdeab0ce674ccaad2ac78ef87c.scope - libcontainer container 28b8bd94ea82b1009cdb7f4e2408222ca2636fcdeab0ce674ccaad2ac78ef87c.
Sep 10 04:52:32.078258 containerd[1503]: time="2025-09-10T04:52:32.077965197Z" level=info msg="StartContainer for \"ea98a46cba7476e18c9da48e8ed18ce1c8f6fe0016e91678f8094b7aef32e006\" returns successfully"
Sep 10 04:52:32.085826 containerd[1503]: time="2025-09-10T04:52:32.085793325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-m4s7f,Uid:ef72fcc3-1772-4b2e-b955-7944f6ad07c7,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"28b8bd94ea82b1009cdb7f4e2408222ca2636fcdeab0ce674ccaad2ac78ef87c\""
Sep 10 04:52:32.088015 containerd[1503]: time="2025-09-10T04:52:32.087988673Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 10 04:52:32.613043 kubelet[2663]: I0910 04:52:32.612987 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mt99d" podStartSLOduration=1.612953021 podStartE2EDuration="1.612953021s" podCreationTimestamp="2025-09-10 04:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 04:52:32.612397824 +0000 UTC m=+8.132076857" watchObservedRunningTime="2025-09-10 04:52:32.612953021 +0000 UTC m=+8.132632054"
Sep 10 04:52:33.287269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2210919357.mount: Deactivated successfully.
Sep 10 04:52:33.917373 containerd[1503]: time="2025-09-10T04:52:33.917299905Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 04:52:33.918317 containerd[1503]: time="2025-09-10T04:52:33.918282528Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 10 04:52:33.919136 containerd[1503]: time="2025-09-10T04:52:33.919110501Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 04:52:33.921551 containerd[1503]: time="2025-09-10T04:52:33.921521415Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 04:52:33.922375 containerd[1503]: time="2025-09-10T04:52:33.922319106Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.834300791s"
Sep 10 04:52:33.922375 containerd[1503]: time="2025-09-10T04:52:33.922355268Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 10 04:52:33.925629 containerd[1503]: time="2025-09-10T04:52:33.925598875Z" level=info msg="CreateContainer within sandbox \"28b8bd94ea82b1009cdb7f4e2408222ca2636fcdeab0ce674ccaad2ac78ef87c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 10 04:52:33.932861 containerd[1503]: time="2025-09-10T04:52:33.932819615Z" level=info msg="Container 47bc0d9b3c06eff37d6402ba64d729bfa37e7af470934c21ca5db82f903b6908: CDI devices from CRI Config.CDIDevices: []"
Sep 10 04:52:33.978516 containerd[1503]: time="2025-09-10T04:52:33.978471207Z" level=info msg="CreateContainer within sandbox \"28b8bd94ea82b1009cdb7f4e2408222ca2636fcdeab0ce674ccaad2ac78ef87c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"47bc0d9b3c06eff37d6402ba64d729bfa37e7af470934c21ca5db82f903b6908\""
Sep 10 04:52:33.979370 containerd[1503]: time="2025-09-10T04:52:33.979346543Z" level=info msg="StartContainer for \"47bc0d9b3c06eff37d6402ba64d729bfa37e7af470934c21ca5db82f903b6908\""
Sep 10 04:52:33.980484 containerd[1503]: time="2025-09-10T04:52:33.980458174Z" level=info msg="connecting to shim 47bc0d9b3c06eff37d6402ba64d729bfa37e7af470934c21ca5db82f903b6908" address="unix:///run/containerd/s/4905f95dcfeffbf98dc952eb437d7bde5e381860c3de8be81eff7c32a781f88b" protocol=ttrpc version=3
Sep 10 04:52:34.002089 systemd[1]: Started cri-containerd-47bc0d9b3c06eff37d6402ba64d729bfa37e7af470934c21ca5db82f903b6908.scope - libcontainer container 47bc0d9b3c06eff37d6402ba64d729bfa37e7af470934c21ca5db82f903b6908.
Sep 10 04:52:34.025854 containerd[1503]: time="2025-09-10T04:52:34.025782140Z" level=info msg="StartContainer for \"47bc0d9b3c06eff37d6402ba64d729bfa37e7af470934c21ca5db82f903b6908\" returns successfully"
Sep 10 04:52:34.610289 kubelet[2663]: I0910 04:52:34.610076 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-m4s7f" podStartSLOduration=1.7745729940000001 podStartE2EDuration="3.610061862s" podCreationTimestamp="2025-09-10 04:52:31 +0000 UTC" firstStartedPulling="2025-09-10 04:52:32.087641769 +0000 UTC m=+7.607320762" lastFinishedPulling="2025-09-10 04:52:33.923130637 +0000 UTC m=+9.442809630" observedRunningTime="2025-09-10 04:52:34.609957496 +0000 UTC m=+10.129636529" watchObservedRunningTime="2025-09-10 04:52:34.610061862 +0000 UTC m=+10.129740895"
Sep 10 04:52:39.207064 sudo[1714]: pam_unix(sudo:session): session closed for user root
Sep 10 04:52:39.209002 sshd[1713]: Connection closed by 10.0.0.1 port 55500
Sep 10 04:52:39.209852 sshd-session[1710]: pam_unix(sshd:session): session closed for user core
Sep 10 04:52:39.215267 systemd[1]: sshd@6-10.0.0.60:22-10.0.0.1:55500.service: Deactivated successfully.
Sep 10 04:52:39.220008 systemd[1]: session-7.scope: Deactivated successfully.
Sep 10 04:52:39.220246 systemd[1]: session-7.scope: Consumed 6.053s CPU time, 219.8M memory peak.
Sep 10 04:52:39.222061 systemd-logind[1476]: Session 7 logged out. Waiting for processes to exit.
Sep 10 04:52:39.224246 systemd-logind[1476]: Removed session 7.
Sep 10 04:52:40.573053 update_engine[1479]: I20250910 04:52:40.572981 1479 update_attempter.cc:509] Updating boot flags...
Sep 10 04:52:45.080712 systemd[1]: Created slice kubepods-besteffort-pod9f29e05e_8235_40c4_9ee4_be5fefbf2c01.slice - libcontainer container kubepods-besteffort-pod9f29e05e_8235_40c4_9ee4_be5fefbf2c01.slice.
Sep 10 04:52:45.094048 kubelet[2663]: I0910 04:52:45.093978 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f29e05e-8235-40c4-9ee4-be5fefbf2c01-tigera-ca-bundle\") pod \"calico-typha-694458ffc9-nl7t9\" (UID: \"9f29e05e-8235-40c4-9ee4-be5fefbf2c01\") " pod="calico-system/calico-typha-694458ffc9-nl7t9"
Sep 10 04:52:45.094559 kubelet[2663]: I0910 04:52:45.094214 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9f29e05e-8235-40c4-9ee4-be5fefbf2c01-typha-certs\") pod \"calico-typha-694458ffc9-nl7t9\" (UID: \"9f29e05e-8235-40c4-9ee4-be5fefbf2c01\") " pod="calico-system/calico-typha-694458ffc9-nl7t9"
Sep 10 04:52:45.095162 kubelet[2663]: I0910 04:52:45.095120 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg245\" (UniqueName: \"kubernetes.io/projected/9f29e05e-8235-40c4-9ee4-be5fefbf2c01-kube-api-access-xg245\") pod \"calico-typha-694458ffc9-nl7t9\" (UID: \"9f29e05e-8235-40c4-9ee4-be5fefbf2c01\") " pod="calico-system/calico-typha-694458ffc9-nl7t9"
Sep 10 04:52:45.351958 systemd[1]: Created slice kubepods-besteffort-podd185ae9a_8bef_44b6_ad7d_34a5ae8782bb.slice - libcontainer container kubepods-besteffort-podd185ae9a_8bef_44b6_ad7d_34a5ae8782bb.slice.
Sep 10 04:52:45.388987 containerd[1503]: time="2025-09-10T04:52:45.388945572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-694458ffc9-nl7t9,Uid:9f29e05e-8235-40c4-9ee4-be5fefbf2c01,Namespace:calico-system,Attempt:0,}"
Sep 10 04:52:45.397227 kubelet[2663]: I0910 04:52:45.397164 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d185ae9a-8bef-44b6-ad7d-34a5ae8782bb-var-lib-calico\") pod \"calico-node-z7mdk\" (UID: \"d185ae9a-8bef-44b6-ad7d-34a5ae8782bb\") " pod="calico-system/calico-node-z7mdk"
Sep 10 04:52:45.397439 kubelet[2663]: I0910 04:52:45.397330 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d185ae9a-8bef-44b6-ad7d-34a5ae8782bb-node-certs\") pod \"calico-node-z7mdk\" (UID: \"d185ae9a-8bef-44b6-ad7d-34a5ae8782bb\") " pod="calico-system/calico-node-z7mdk"
Sep 10 04:52:45.397439 kubelet[2663]: I0910 04:52:45.397363 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/d185ae9a-8bef-44b6-ad7d-34a5ae8782bb-var-run-calico\") pod \"calico-node-z7mdk\" (UID: \"d185ae9a-8bef-44b6-ad7d-34a5ae8782bb\") " pod="calico-system/calico-node-z7mdk"
Sep 10 04:52:45.397591 kubelet[2663]: I0910 04:52:45.397392 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d185ae9a-8bef-44b6-ad7d-34a5ae8782bb-flexvol-driver-host\") pod \"calico-node-z7mdk\" (UID: \"d185ae9a-8bef-44b6-ad7d-34a5ae8782bb\") " pod="calico-system/calico-node-z7mdk"
Sep 10 04:52:45.397591 kubelet[2663]: I0910 04:52:45.397541 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d185ae9a-8bef-44b6-ad7d-34a5ae8782bb-tigera-ca-bundle\") pod \"calico-node-z7mdk\" (UID: \"d185ae9a-8bef-44b6-ad7d-34a5ae8782bb\") " pod="calico-system/calico-node-z7mdk"
Sep 10 04:52:45.397591 kubelet[2663]: I0910 04:52:45.397560 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d185ae9a-8bef-44b6-ad7d-34a5ae8782bb-lib-modules\") pod \"calico-node-z7mdk\" (UID: \"d185ae9a-8bef-44b6-ad7d-34a5ae8782bb\") " pod="calico-system/calico-node-z7mdk"
Sep 10 04:52:45.397766 kubelet[2663]: I0910 04:52:45.397575 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d185ae9a-8bef-44b6-ad7d-34a5ae8782bb-xtables-lock\") pod \"calico-node-z7mdk\" (UID: \"d185ae9a-8bef-44b6-ad7d-34a5ae8782bb\") " pod="calico-system/calico-node-z7mdk"
Sep 10 04:52:45.397889 kubelet[2663]: I0910 04:52:45.397833 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d185ae9a-8bef-44b6-ad7d-34a5ae8782bb-cni-net-dir\") pod \"calico-node-z7mdk\" (UID: \"d185ae9a-8bef-44b6-ad7d-34a5ae8782bb\") " pod="calico-system/calico-node-z7mdk"
Sep 10 04:52:45.397889 kubelet[2663]: I0910 04:52:45.397857 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d185ae9a-8bef-44b6-ad7d-34a5ae8782bb-policysync\") pod \"calico-node-z7mdk\" (UID: \"d185ae9a-8bef-44b6-ad7d-34a5ae8782bb\") " pod="calico-system/calico-node-z7mdk"
Sep 10 04:52:45.398019 kubelet[2663]: I0910 04:52:45.397875 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d185ae9a-8bef-44b6-ad7d-34a5ae8782bb-cni-bin-dir\") pod \"calico-node-z7mdk\" (UID: \"d185ae9a-8bef-44b6-ad7d-34a5ae8782bb\") " pod="calico-system/calico-node-z7mdk"
Sep 10 04:52:45.398197 kubelet[2663]: I0910 04:52:45.398182 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d185ae9a-8bef-44b6-ad7d-34a5ae8782bb-cni-log-dir\") pod \"calico-node-z7mdk\" (UID: \"d185ae9a-8bef-44b6-ad7d-34a5ae8782bb\") " pod="calico-system/calico-node-z7mdk"
Sep 10 04:52:45.398296 kubelet[2663]: I0910 04:52:45.398283 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlmvh\" (UniqueName: \"kubernetes.io/projected/d185ae9a-8bef-44b6-ad7d-34a5ae8782bb-kube-api-access-vlmvh\") pod \"calico-node-z7mdk\" (UID: \"d185ae9a-8bef-44b6-ad7d-34a5ae8782bb\") " pod="calico-system/calico-node-z7mdk"
Sep 10 04:52:45.421495 containerd[1503]: time="2025-09-10T04:52:45.421435812Z" level=info msg="connecting to shim aafba76315b9bc379f9fc05c04729983ebf4441d6f207d1a14befc7576d83e3b" address="unix:///run/containerd/s/704c6362c425912f81532a750177acab8c046dd03666a77808b505d1a2ff967e" namespace=k8s.io protocol=ttrpc version=3
Sep 10 04:52:45.475140 systemd[1]: Started cri-containerd-aafba76315b9bc379f9fc05c04729983ebf4441d6f207d1a14befc7576d83e3b.scope - libcontainer container aafba76315b9bc379f9fc05c04729983ebf4441d6f207d1a14befc7576d83e3b.
Sep 10 04:52:45.500768 kubelet[2663]: E0910 04:52:45.500612 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 04:52:45.500768 kubelet[2663]: W0910 04:52:45.500749 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 04:52:45.513252 kubelet[2663]: E0910 04:52:45.513127 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 04:52:45.513957 kubelet[2663]: E0910 04:52:45.513374 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 04:52:45.513957 kubelet[2663]: W0910 04:52:45.513391 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 04:52:45.513957 kubelet[2663]: E0910 04:52:45.513482 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 04:52:45.513957 kubelet[2663]: E0910 04:52:45.513694 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 04:52:45.513957 kubelet[2663]: W0910 04:52:45.513735 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 04:52:45.513957 kubelet[2663]: E0910 04:52:45.513768 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 04:52:45.515963 kubelet[2663]: E0910 04:52:45.514532 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 04:52:45.516023 kubelet[2663]: W0910 04:52:45.515967 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 04:52:45.516023 kubelet[2663]: E0910 04:52:45.515994 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 04:52:45.516234 kubelet[2663]: E0910 04:52:45.516219 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 04:52:45.516234 kubelet[2663]: W0910 04:52:45.516233 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 04:52:45.516291 kubelet[2663]: E0910 04:52:45.516242 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 04:52:45.516657 kubelet[2663]: E0910 04:52:45.516566 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 04:52:45.516657 kubelet[2663]: W0910 04:52:45.516582 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 04:52:45.516657 kubelet[2663]: E0910 04:52:45.516593 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 04:52:45.517588 kubelet[2663]: E0910 04:52:45.517553 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 04:52:45.517588 kubelet[2663]: W0910 04:52:45.517572 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 04:52:45.517588 kubelet[2663]: E0910 04:52:45.517587 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 10 04:52:45.519826 containerd[1503]: time="2025-09-10T04:52:45.519776601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-694458ffc9-nl7t9,Uid:9f29e05e-8235-40c4-9ee4-be5fefbf2c01,Namespace:calico-system,Attempt:0,} returns sandbox id \"aafba76315b9bc379f9fc05c04729983ebf4441d6f207d1a14befc7576d83e3b\"" Sep 10 04:52:45.522841 containerd[1503]: time="2025-09-10T04:52:45.522802505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 10 04:52:45.632005 kubelet[2663]: E0910 04:52:45.630777 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbzqr" podUID="f6ad6904-9c3f-4923-ac04-3f47e122a4d0" Sep 10 04:52:45.658460 containerd[1503]: time="2025-09-10T04:52:45.658417779Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z7mdk,Uid:d185ae9a-8bef-44b6-ad7d-34a5ae8782bb,Namespace:calico-system,Attempt:0,}" Sep 10 04:52:45.675973 kubelet[2663]: E0910 04:52:45.675921 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.675973 kubelet[2663]: W0910 04:52:45.675966 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.676136 kubelet[2663]: E0910 04:52:45.675988 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.676199 kubelet[2663]: E0910 04:52:45.676186 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.680084 kubelet[2663]: W0910 04:52:45.676284 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.680084 kubelet[2663]: E0910 04:52:45.680018 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.680462 kubelet[2663]: E0910 04:52:45.680346 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.680462 kubelet[2663]: W0910 04:52:45.680368 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.680462 kubelet[2663]: E0910 04:52:45.680381 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.680792 kubelet[2663]: E0910 04:52:45.680661 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.680792 kubelet[2663]: W0910 04:52:45.680675 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.680792 kubelet[2663]: E0910 04:52:45.680687 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.681001 kubelet[2663]: E0910 04:52:45.680987 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.681076 kubelet[2663]: W0910 04:52:45.681063 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.681163 kubelet[2663]: E0910 04:52:45.681148 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.681535 kubelet[2663]: E0910 04:52:45.681444 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.681535 kubelet[2663]: W0910 04:52:45.681456 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.681535 kubelet[2663]: E0910 04:52:45.681471 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.681661 kubelet[2663]: E0910 04:52:45.681649 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.681719 kubelet[2663]: W0910 04:52:45.681708 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.681783 kubelet[2663]: E0910 04:52:45.681771 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.682623 kubelet[2663]: E0910 04:52:45.682452 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.682623 kubelet[2663]: W0910 04:52:45.682470 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.682623 kubelet[2663]: E0910 04:52:45.682496 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.683124 kubelet[2663]: E0910 04:52:45.682993 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.683124 kubelet[2663]: W0910 04:52:45.683012 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.683124 kubelet[2663]: E0910 04:52:45.683035 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.685164 kubelet[2663]: E0910 04:52:45.684994 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.685164 kubelet[2663]: W0910 04:52:45.685018 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.685164 kubelet[2663]: E0910 04:52:45.685034 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.685290 kubelet[2663]: E0910 04:52:45.685256 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.685290 kubelet[2663]: W0910 04:52:45.685266 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.685290 kubelet[2663]: E0910 04:52:45.685275 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.685839 kubelet[2663]: E0910 04:52:45.685556 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.685839 kubelet[2663]: W0910 04:52:45.685573 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.685839 kubelet[2663]: E0910 04:52:45.685584 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.686137 kubelet[2663]: E0910 04:52:45.685998 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.686137 kubelet[2663]: W0910 04:52:45.686013 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.686137 kubelet[2663]: E0910 04:52:45.686032 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.686385 kubelet[2663]: E0910 04:52:45.686361 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.686464 kubelet[2663]: W0910 04:52:45.686449 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.686530 kubelet[2663]: E0910 04:52:45.686520 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.687358 kubelet[2663]: E0910 04:52:45.687248 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.687358 kubelet[2663]: W0910 04:52:45.687271 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.687358 kubelet[2663]: E0910 04:52:45.687285 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.688188 kubelet[2663]: E0910 04:52:45.688162 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.688188 kubelet[2663]: W0910 04:52:45.688184 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.688283 kubelet[2663]: E0910 04:52:45.688198 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.688522 kubelet[2663]: E0910 04:52:45.688500 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.688522 kubelet[2663]: W0910 04:52:45.688512 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.688646 kubelet[2663]: E0910 04:52:45.688528 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.688770 kubelet[2663]: E0910 04:52:45.688704 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.688770 kubelet[2663]: W0910 04:52:45.688716 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.688770 kubelet[2663]: E0910 04:52:45.688725 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.689378 kubelet[2663]: E0910 04:52:45.689356 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.689378 kubelet[2663]: W0910 04:52:45.689372 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.689378 kubelet[2663]: E0910 04:52:45.689384 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.689945 kubelet[2663]: E0910 04:52:45.689786 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.689945 kubelet[2663]: W0910 04:52:45.689906 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.689945 kubelet[2663]: E0910 04:52:45.689924 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.692903 containerd[1503]: time="2025-09-10T04:52:45.692775643Z" level=info msg="connecting to shim 9b0fd4fc47e782cbbfd8a9a64953e02f3d165fcb49a5e29e4dcf54f32db2f6ed" address="unix:///run/containerd/s/54dc299561ebe177088480a358faeefc0e96623a8f7cc850e23bd73bcdad51e9" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:52:45.700197 kubelet[2663]: E0910 04:52:45.700165 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.700319 kubelet[2663]: W0910 04:52:45.700277 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.700319 kubelet[2663]: E0910 04:52:45.700299 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.700470 kubelet[2663]: I0910 04:52:45.700419 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f6ad6904-9c3f-4923-ac04-3f47e122a4d0-kubelet-dir\") pod \"csi-node-driver-cbzqr\" (UID: \"f6ad6904-9c3f-4923-ac04-3f47e122a4d0\") " pod="calico-system/csi-node-driver-cbzqr" Sep 10 04:52:45.701277 kubelet[2663]: E0910 04:52:45.700873 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.701277 kubelet[2663]: W0910 04:52:45.700895 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.701277 kubelet[2663]: E0910 04:52:45.701095 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.701277 kubelet[2663]: I0910 04:52:45.701153 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f6ad6904-9c3f-4923-ac04-3f47e122a4d0-socket-dir\") pod \"csi-node-driver-cbzqr\" (UID: \"f6ad6904-9c3f-4923-ac04-3f47e122a4d0\") " pod="calico-system/csi-node-driver-cbzqr" Sep 10 04:52:45.701728 kubelet[2663]: E0910 04:52:45.701628 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.701728 kubelet[2663]: W0910 04:52:45.701645 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.701728 kubelet[2663]: E0910 04:52:45.701657 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.701728 kubelet[2663]: I0910 04:52:45.701685 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9l8f\" (UniqueName: \"kubernetes.io/projected/f6ad6904-9c3f-4923-ac04-3f47e122a4d0-kube-api-access-q9l8f\") pod \"csi-node-driver-cbzqr\" (UID: \"f6ad6904-9c3f-4923-ac04-3f47e122a4d0\") " pod="calico-system/csi-node-driver-cbzqr" Sep 10 04:52:45.702410 kubelet[2663]: E0910 04:52:45.702257 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.702410 kubelet[2663]: W0910 04:52:45.702275 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.702410 kubelet[2663]: E0910 04:52:45.702288 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.702410 kubelet[2663]: I0910 04:52:45.702346 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f6ad6904-9c3f-4923-ac04-3f47e122a4d0-registration-dir\") pod \"csi-node-driver-cbzqr\" (UID: \"f6ad6904-9c3f-4923-ac04-3f47e122a4d0\") " pod="calico-system/csi-node-driver-cbzqr" Sep 10 04:52:45.702906 kubelet[2663]: E0910 04:52:45.702852 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.703023 kubelet[2663]: W0910 04:52:45.703007 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.703149 kubelet[2663]: E0910 04:52:45.703135 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.703987 kubelet[2663]: E0910 04:52:45.703774 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.703987 kubelet[2663]: W0910 04:52:45.703791 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.703987 kubelet[2663]: E0910 04:52:45.703804 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.704610 kubelet[2663]: E0910 04:52:45.704167 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.704610 kubelet[2663]: W0910 04:52:45.704182 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.704773 kubelet[2663]: E0910 04:52:45.704194 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.706065 kubelet[2663]: E0910 04:52:45.706046 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.706195 kubelet[2663]: W0910 04:52:45.706146 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.706195 kubelet[2663]: E0910 04:52:45.706162 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.706374 kubelet[2663]: I0910 04:52:45.706350 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f6ad6904-9c3f-4923-ac04-3f47e122a4d0-varrun\") pod \"csi-node-driver-cbzqr\" (UID: \"f6ad6904-9c3f-4923-ac04-3f47e122a4d0\") " pod="calico-system/csi-node-driver-cbzqr" Sep 10 04:52:45.706720 kubelet[2663]: E0910 04:52:45.706610 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.706720 kubelet[2663]: W0910 04:52:45.706622 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.706720 kubelet[2663]: E0910 04:52:45.706640 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.706883 kubelet[2663]: E0910 04:52:45.706870 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.706978 kubelet[2663]: W0910 04:52:45.706923 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.707032 kubelet[2663]: E0910 04:52:45.707019 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.707446 kubelet[2663]: E0910 04:52:45.707319 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.707446 kubelet[2663]: W0910 04:52:45.707353 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.707446 kubelet[2663]: E0910 04:52:45.707364 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.707730 kubelet[2663]: E0910 04:52:45.707603 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.707730 kubelet[2663]: W0910 04:52:45.707619 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.707730 kubelet[2663]: E0910 04:52:45.707628 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.707885 kubelet[2663]: E0910 04:52:45.707872 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.708137 kubelet[2663]: W0910 04:52:45.707924 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.708137 kubelet[2663]: E0910 04:52:45.707998 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.708294 kubelet[2663]: E0910 04:52:45.708279 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.708351 kubelet[2663]: W0910 04:52:45.708340 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.708431 kubelet[2663]: E0910 04:52:45.708403 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.708699 kubelet[2663]: E0910 04:52:45.708652 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.708699 kubelet[2663]: W0910 04:52:45.708665 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.708848 kubelet[2663]: E0910 04:52:45.708805 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.743119 systemd[1]: Started cri-containerd-9b0fd4fc47e782cbbfd8a9a64953e02f3d165fcb49a5e29e4dcf54f32db2f6ed.scope - libcontainer container 9b0fd4fc47e782cbbfd8a9a64953e02f3d165fcb49a5e29e4dcf54f32db2f6ed. Sep 10 04:52:45.778461 containerd[1503]: time="2025-09-10T04:52:45.778423355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-z7mdk,Uid:d185ae9a-8bef-44b6-ad7d-34a5ae8782bb,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b0fd4fc47e782cbbfd8a9a64953e02f3d165fcb49a5e29e4dcf54f32db2f6ed\"" Sep 10 04:52:45.809371 kubelet[2663]: E0910 04:52:45.809329 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.809371 kubelet[2663]: W0910 04:52:45.809352 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.809371 kubelet[2663]: E0910 04:52:45.809373 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.809944 kubelet[2663]: E0910 04:52:45.809542 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.809944 kubelet[2663]: W0910 04:52:45.809550 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.809944 kubelet[2663]: E0910 04:52:45.809558 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.809944 kubelet[2663]: E0910 04:52:45.809743 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.809944 kubelet[2663]: W0910 04:52:45.809750 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.809944 kubelet[2663]: E0910 04:52:45.809757 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.809944 kubelet[2663]: E0910 04:52:45.809883 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.809944 kubelet[2663]: W0910 04:52:45.809889 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.809944 kubelet[2663]: E0910 04:52:45.809896 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.810330 kubelet[2663]: E0910 04:52:45.810062 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.810330 kubelet[2663]: W0910 04:52:45.810070 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.810330 kubelet[2663]: E0910 04:52:45.810078 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.810405 kubelet[2663]: E0910 04:52:45.810382 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.810405 kubelet[2663]: W0910 04:52:45.810397 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.810470 kubelet[2663]: E0910 04:52:45.810410 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.810586 kubelet[2663]: E0910 04:52:45.810576 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.810614 kubelet[2663]: W0910 04:52:45.810587 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.810614 kubelet[2663]: E0910 04:52:45.810595 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.810736 kubelet[2663]: E0910 04:52:45.810726 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.810736 kubelet[2663]: W0910 04:52:45.810735 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.810810 kubelet[2663]: E0910 04:52:45.810743 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.810889 kubelet[2663]: E0910 04:52:45.810879 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.810914 kubelet[2663]: W0910 04:52:45.810889 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.810914 kubelet[2663]: E0910 04:52:45.810897 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.811045 kubelet[2663]: E0910 04:52:45.811035 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.811045 kubelet[2663]: W0910 04:52:45.811044 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.811136 kubelet[2663]: E0910 04:52:45.811052 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.811200 kubelet[2663]: E0910 04:52:45.811189 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.811200 kubelet[2663]: W0910 04:52:45.811200 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.811260 kubelet[2663]: E0910 04:52:45.811208 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.811711 kubelet[2663]: E0910 04:52:45.811691 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.811711 kubelet[2663]: W0910 04:52:45.811706 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.811988 kubelet[2663]: E0910 04:52:45.811721 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.812154 kubelet[2663]: E0910 04:52:45.812118 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.812154 kubelet[2663]: W0910 04:52:45.812135 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.812154 kubelet[2663]: E0910 04:52:45.812147 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.812470 kubelet[2663]: E0910 04:52:45.812419 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.812470 kubelet[2663]: W0910 04:52:45.812429 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.812470 kubelet[2663]: E0910 04:52:45.812439 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.814020 kubelet[2663]: E0910 04:52:45.814002 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.814020 kubelet[2663]: W0910 04:52:45.814019 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.814101 kubelet[2663]: E0910 04:52:45.814034 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.814269 kubelet[2663]: E0910 04:52:45.814255 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.814269 kubelet[2663]: W0910 04:52:45.814267 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.814407 kubelet[2663]: E0910 04:52:45.814277 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.814585 kubelet[2663]: E0910 04:52:45.814568 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.814585 kubelet[2663]: W0910 04:52:45.814584 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.814657 kubelet[2663]: E0910 04:52:45.814595 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.815580 kubelet[2663]: E0910 04:52:45.815557 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.815580 kubelet[2663]: W0910 04:52:45.815576 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.815678 kubelet[2663]: E0910 04:52:45.815590 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.816253 kubelet[2663]: E0910 04:52:45.816232 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.816253 kubelet[2663]: W0910 04:52:45.816247 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.816329 kubelet[2663]: E0910 04:52:45.816259 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.816777 kubelet[2663]: E0910 04:52:45.816756 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.816777 kubelet[2663]: W0910 04:52:45.816771 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.816870 kubelet[2663]: E0910 04:52:45.816785 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.817897 kubelet[2663]: E0910 04:52:45.817870 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.817897 kubelet[2663]: W0910 04:52:45.817888 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.817897 kubelet[2663]: E0910 04:52:45.817900 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.818416 kubelet[2663]: E0910 04:52:45.818395 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.818416 kubelet[2663]: W0910 04:52:45.818410 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.818491 kubelet[2663]: E0910 04:52:45.818422 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.819954 kubelet[2663]: E0910 04:52:45.819324 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.819954 kubelet[2663]: W0910 04:52:45.819340 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.819954 kubelet[2663]: E0910 04:52:45.819352 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.819954 kubelet[2663]: E0910 04:52:45.819744 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.819954 kubelet[2663]: W0910 04:52:45.819755 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.820102 kubelet[2663]: E0910 04:52:45.819768 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:45.820672 kubelet[2663]: E0910 04:52:45.820545 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.820672 kubelet[2663]: W0910 04:52:45.820562 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.820672 kubelet[2663]: E0910 04:52:45.820574 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:45.837565 kubelet[2663]: E0910 04:52:45.837538 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:45.837565 kubelet[2663]: W0910 04:52:45.837557 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:45.837565 kubelet[2663]: E0910 04:52:45.837575 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:46.629401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3436043108.mount: Deactivated successfully. Sep 10 04:52:47.089944 containerd[1503]: time="2025-09-10T04:52:47.089884476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:47.090545 containerd[1503]: time="2025-09-10T04:52:47.090512135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 10 04:52:47.091247 containerd[1503]: time="2025-09-10T04:52:47.091222038Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:47.099502 containerd[1503]: time="2025-09-10T04:52:47.099468617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:47.100710 containerd[1503]: time="2025-09-10T04:52:47.100591612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id 
\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.577747226s" Sep 10 04:52:47.100710 containerd[1503]: time="2025-09-10T04:52:47.100631333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 10 04:52:47.102278 containerd[1503]: time="2025-09-10T04:52:47.101622325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 10 04:52:47.118494 containerd[1503]: time="2025-09-10T04:52:47.118454894Z" level=info msg="CreateContainer within sandbox \"aafba76315b9bc379f9fc05c04729983ebf4441d6f207d1a14befc7576d83e3b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 10 04:52:47.145760 containerd[1503]: time="2025-09-10T04:52:47.145672469Z" level=info msg="Container 5820d828307131255b3251fe6f071cb28791460c17fd2ea4a9f548933ee57bb2: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:52:47.159988 containerd[1503]: time="2025-09-10T04:52:47.159858395Z" level=info msg="CreateContainer within sandbox \"aafba76315b9bc379f9fc05c04729983ebf4441d6f207d1a14befc7576d83e3b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5820d828307131255b3251fe6f071cb28791460c17fd2ea4a9f548933ee57bb2\"" Sep 10 04:52:47.161047 containerd[1503]: time="2025-09-10T04:52:47.160991710Z" level=info msg="StartContainer for \"5820d828307131255b3251fe6f071cb28791460c17fd2ea4a9f548933ee57bb2\"" Sep 10 04:52:47.162143 containerd[1503]: time="2025-09-10T04:52:47.162071864Z" level=info msg="connecting to shim 5820d828307131255b3251fe6f071cb28791460c17fd2ea4a9f548933ee57bb2" address="unix:///run/containerd/s/704c6362c425912f81532a750177acab8c046dd03666a77808b505d1a2ff967e" protocol=ttrpc version=3 Sep 10 
04:52:47.182106 systemd[1]: Started cri-containerd-5820d828307131255b3251fe6f071cb28791460c17fd2ea4a9f548933ee57bb2.scope - libcontainer container 5820d828307131255b3251fe6f071cb28791460c17fd2ea4a9f548933ee57bb2. Sep 10 04:52:47.239223 containerd[1503]: time="2025-09-10T04:52:47.239181728Z" level=info msg="StartContainer for \"5820d828307131255b3251fe6f071cb28791460c17fd2ea4a9f548933ee57bb2\" returns successfully" Sep 10 04:52:47.568911 kubelet[2663]: E0910 04:52:47.568857 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbzqr" podUID="f6ad6904-9c3f-4923-ac04-3f47e122a4d0" Sep 10 04:52:47.705557 kubelet[2663]: E0910 04:52:47.705523 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.705557 kubelet[2663]: W0910 04:52:47.705550 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.705850 kubelet[2663]: E0910 04:52:47.705772 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.706252 kubelet[2663]: E0910 04:52:47.706234 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.706252 kubelet[2663]: W0910 04:52:47.706251 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.706335 kubelet[2663]: E0910 04:52:47.706265 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.706903 kubelet[2663]: E0910 04:52:47.706636 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.706903 kubelet[2663]: W0910 04:52:47.706653 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.706903 kubelet[2663]: E0910 04:52:47.706664 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.706903 kubelet[2663]: E0910 04:52:47.706896 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.706903 kubelet[2663]: W0910 04:52:47.706905 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.707078 kubelet[2663]: E0910 04:52:47.706915 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.707153 kubelet[2663]: E0910 04:52:47.707131 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.707153 kubelet[2663]: W0910 04:52:47.707144 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.707153 kubelet[2663]: E0910 04:52:47.707153 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.707587 kubelet[2663]: E0910 04:52:47.707324 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.707587 kubelet[2663]: W0910 04:52:47.707339 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.707587 kubelet[2663]: E0910 04:52:47.707369 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.707587 kubelet[2663]: E0910 04:52:47.707571 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.707587 kubelet[2663]: W0910 04:52:47.707582 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.707740 kubelet[2663]: E0910 04:52:47.707594 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.707948 kubelet[2663]: E0910 04:52:47.707911 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.707987 kubelet[2663]: W0910 04:52:47.707961 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.707987 kubelet[2663]: E0910 04:52:47.707974 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.708253 kubelet[2663]: E0910 04:52:47.708233 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.708253 kubelet[2663]: W0910 04:52:47.708247 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.708320 kubelet[2663]: E0910 04:52:47.708256 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.708823 kubelet[2663]: E0910 04:52:47.708807 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.708823 kubelet[2663]: W0910 04:52:47.708822 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.708901 kubelet[2663]: E0910 04:52:47.708834 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.709126 kubelet[2663]: E0910 04:52:47.709098 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.709126 kubelet[2663]: W0910 04:52:47.709111 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.709126 kubelet[2663]: E0910 04:52:47.709121 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.709372 kubelet[2663]: E0910 04:52:47.709336 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.709372 kubelet[2663]: W0910 04:52:47.709351 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.709580 kubelet[2663]: E0910 04:52:47.709377 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.709670 kubelet[2663]: E0910 04:52:47.709652 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.709670 kubelet[2663]: W0910 04:52:47.709666 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.709724 kubelet[2663]: E0910 04:52:47.709676 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.709861 kubelet[2663]: E0910 04:52:47.709834 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.709909 kubelet[2663]: W0910 04:52:47.709879 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.709909 kubelet[2663]: E0910 04:52:47.709893 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.710373 kubelet[2663]: E0910 04:52:47.710123 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.710373 kubelet[2663]: W0910 04:52:47.710172 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.710373 kubelet[2663]: E0910 04:52:47.710184 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.724582 kubelet[2663]: E0910 04:52:47.724542 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.724582 kubelet[2663]: W0910 04:52:47.724566 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.724582 kubelet[2663]: E0910 04:52:47.724587 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.729670 kubelet[2663]: E0910 04:52:47.729637 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.729670 kubelet[2663]: W0910 04:52:47.729660 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.729737 kubelet[2663]: E0910 04:52:47.729677 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.730021 kubelet[2663]: E0910 04:52:47.729992 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.730021 kubelet[2663]: W0910 04:52:47.730015 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.730092 kubelet[2663]: E0910 04:52:47.730029 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.730246 kubelet[2663]: E0910 04:52:47.730231 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.730246 kubelet[2663]: W0910 04:52:47.730245 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.730296 kubelet[2663]: E0910 04:52:47.730255 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.730435 kubelet[2663]: E0910 04:52:47.730409 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.730435 kubelet[2663]: W0910 04:52:47.730421 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.730435 kubelet[2663]: E0910 04:52:47.730431 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.730634 kubelet[2663]: E0910 04:52:47.730620 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.730634 kubelet[2663]: W0910 04:52:47.730632 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.730687 kubelet[2663]: E0910 04:52:47.730641 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.730919 kubelet[2663]: E0910 04:52:47.730890 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.730919 kubelet[2663]: W0910 04:52:47.730906 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.730919 kubelet[2663]: E0910 04:52:47.730917 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.731150 kubelet[2663]: E0910 04:52:47.731134 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.731150 kubelet[2663]: W0910 04:52:47.731149 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.731200 kubelet[2663]: E0910 04:52:47.731167 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.731363 kubelet[2663]: E0910 04:52:47.731349 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.731363 kubelet[2663]: W0910 04:52:47.731361 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.731417 kubelet[2663]: E0910 04:52:47.731370 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.731527 kubelet[2663]: E0910 04:52:47.731514 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.731527 kubelet[2663]: W0910 04:52:47.731524 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.731571 kubelet[2663]: E0910 04:52:47.731533 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.731697 kubelet[2663]: E0910 04:52:47.731684 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.731727 kubelet[2663]: W0910 04:52:47.731697 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.731727 kubelet[2663]: E0910 04:52:47.731705 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.731948 kubelet[2663]: E0910 04:52:47.731919 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.731948 kubelet[2663]: W0910 04:52:47.731940 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.731995 kubelet[2663]: E0910 04:52:47.731949 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.732164 kubelet[2663]: E0910 04:52:47.732146 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.732201 kubelet[2663]: W0910 04:52:47.732158 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.732201 kubelet[2663]: E0910 04:52:47.732176 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.732403 kubelet[2663]: E0910 04:52:47.732388 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.732403 kubelet[2663]: W0910 04:52:47.732401 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.732452 kubelet[2663]: E0910 04:52:47.732410 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.732571 kubelet[2663]: E0910 04:52:47.732556 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.732602 kubelet[2663]: W0910 04:52:47.732570 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.732602 kubelet[2663]: E0910 04:52:47.732579 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.732748 kubelet[2663]: E0910 04:52:47.732733 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.732748 kubelet[2663]: W0910 04:52:47.732747 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.732798 kubelet[2663]: E0910 04:52:47.732755 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:47.733006 kubelet[2663]: E0910 04:52:47.732991 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.733038 kubelet[2663]: W0910 04:52:47.733006 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.733038 kubelet[2663]: E0910 04:52:47.733017 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 04:52:47.733336 kubelet[2663]: E0910 04:52:47.733301 2663 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 04:52:47.733336 kubelet[2663]: W0910 04:52:47.733321 2663 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 04:52:47.733336 kubelet[2663]: E0910 04:52:47.733332 2663 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 04:52:48.299647 containerd[1503]: time="2025-09-10T04:52:48.299610123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:48.300876 containerd[1503]: time="2025-09-10T04:52:48.300851520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 10 04:52:48.301446 containerd[1503]: time="2025-09-10T04:52:48.301415897Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:48.303611 containerd[1503]: time="2025-09-10T04:52:48.303578402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:48.304518 containerd[1503]: time="2025-09-10T04:52:48.304490230Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.202666138s" Sep 10 04:52:48.304574 containerd[1503]: time="2025-09-10T04:52:48.304521871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 10 04:52:48.309199 containerd[1503]: time="2025-09-10T04:52:48.309147250Z" level=info msg="CreateContainer within sandbox \"9b0fd4fc47e782cbbfd8a9a64953e02f3d165fcb49a5e29e4dcf54f32db2f6ed\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 04:52:48.320373 containerd[1503]: time="2025-09-10T04:52:48.320336306Z" level=info msg="Container b3e31487677d5aaf36396f3993a8de129c32a32f36a02cd8f8d1ae608cc940b5: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:52:48.328682 containerd[1503]: time="2025-09-10T04:52:48.328650436Z" level=info msg="CreateContainer within sandbox \"9b0fd4fc47e782cbbfd8a9a64953e02f3d165fcb49a5e29e4dcf54f32db2f6ed\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b3e31487677d5aaf36396f3993a8de129c32a32f36a02cd8f8d1ae608cc940b5\"" Sep 10 04:52:48.332874 containerd[1503]: time="2025-09-10T04:52:48.332841842Z" level=info msg="StartContainer for \"b3e31487677d5aaf36396f3993a8de129c32a32f36a02cd8f8d1ae608cc940b5\"" Sep 10 04:52:48.334197 containerd[1503]: time="2025-09-10T04:52:48.334164161Z" level=info msg="connecting to shim b3e31487677d5aaf36396f3993a8de129c32a32f36a02cd8f8d1ae608cc940b5" address="unix:///run/containerd/s/54dc299561ebe177088480a358faeefc0e96623a8f7cc850e23bd73bcdad51e9" protocol=ttrpc version=3 Sep 10 04:52:48.351099 systemd[1]: Started cri-containerd-b3e31487677d5aaf36396f3993a8de129c32a32f36a02cd8f8d1ae608cc940b5.scope - libcontainer container b3e31487677d5aaf36396f3993a8de129c32a32f36a02cd8f8d1ae608cc940b5. Sep 10 04:52:48.397644 containerd[1503]: time="2025-09-10T04:52:48.397265818Z" level=info msg="StartContainer for \"b3e31487677d5aaf36396f3993a8de129c32a32f36a02cd8f8d1ae608cc940b5\" returns successfully" Sep 10 04:52:48.411357 systemd[1]: cri-containerd-b3e31487677d5aaf36396f3993a8de129c32a32f36a02cd8f8d1ae608cc940b5.scope: Deactivated successfully. 
Sep 10 04:52:48.435583 containerd[1503]: time="2025-09-10T04:52:48.435438965Z" level=info msg="received exit event container_id:\"b3e31487677d5aaf36396f3993a8de129c32a32f36a02cd8f8d1ae608cc940b5\" id:\"b3e31487677d5aaf36396f3993a8de129c32a32f36a02cd8f8d1ae608cc940b5\" pid:3379 exited_at:{seconds:1757479968 nanos:426354292}" Sep 10 04:52:48.435583 containerd[1503]: time="2025-09-10T04:52:48.435538328Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b3e31487677d5aaf36396f3993a8de129c32a32f36a02cd8f8d1ae608cc940b5\" id:\"b3e31487677d5aaf36396f3993a8de129c32a32f36a02cd8f8d1ae608cc940b5\" pid:3379 exited_at:{seconds:1757479968 nanos:426354292}" Sep 10 04:52:48.457059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b3e31487677d5aaf36396f3993a8de129c32a32f36a02cd8f8d1ae608cc940b5-rootfs.mount: Deactivated successfully. Sep 10 04:52:48.648562 kubelet[2663]: I0910 04:52:48.648295 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 04:52:48.650133 containerd[1503]: time="2025-09-10T04:52:48.649453156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 04:52:48.667887 kubelet[2663]: I0910 04:52:48.667816 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-694458ffc9-nl7t9" podStartSLOduration=2.089119451 podStartE2EDuration="3.667798027s" podCreationTimestamp="2025-09-10 04:52:45 +0000 UTC" firstStartedPulling="2025-09-10 04:52:45.522593778 +0000 UTC m=+21.042272771" lastFinishedPulling="2025-09-10 04:52:47.101272314 +0000 UTC m=+22.620951347" observedRunningTime="2025-09-10 04:52:47.653879081 +0000 UTC m=+23.173558114" watchObservedRunningTime="2025-09-10 04:52:48.667798027 +0000 UTC m=+24.187477060" Sep 10 04:52:49.568689 kubelet[2663]: E0910 04:52:49.568587 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbzqr" podUID="f6ad6904-9c3f-4923-ac04-3f47e122a4d0" Sep 10 04:52:51.569971 kubelet[2663]: E0910 04:52:51.569135 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cbzqr" podUID="f6ad6904-9c3f-4923-ac04-3f47e122a4d0" Sep 10 04:52:52.080662 containerd[1503]: time="2025-09-10T04:52:52.080624259Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:52.081087 containerd[1503]: time="2025-09-10T04:52:52.081054070Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 10 04:52:52.081899 containerd[1503]: time="2025-09-10T04:52:52.081845170Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:52.083961 containerd[1503]: time="2025-09-10T04:52:52.083682216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:52.084251 containerd[1503]: time="2025-09-10T04:52:52.084223350Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 3.434706592s" Sep 10 04:52:52.084310 containerd[1503]: time="2025-09-10T04:52:52.084260431Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 10 04:52:52.089364 containerd[1503]: time="2025-09-10T04:52:52.089073753Z" level=info msg="CreateContainer within sandbox \"9b0fd4fc47e782cbbfd8a9a64953e02f3d165fcb49a5e29e4dcf54f32db2f6ed\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 04:52:52.095943 containerd[1503]: time="2025-09-10T04:52:52.095852925Z" level=info msg="Container 4faeaffb649b629a7c26cad7eecc8472e38d03462e4298de9a17c5c411a4715e: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:52:52.103322 containerd[1503]: time="2025-09-10T04:52:52.103287953Z" level=info msg="CreateContainer within sandbox \"9b0fd4fc47e782cbbfd8a9a64953e02f3d165fcb49a5e29e4dcf54f32db2f6ed\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4faeaffb649b629a7c26cad7eecc8472e38d03462e4298de9a17c5c411a4715e\"" Sep 10 04:52:52.103813 containerd[1503]: time="2025-09-10T04:52:52.103739245Z" level=info msg="StartContainer for \"4faeaffb649b629a7c26cad7eecc8472e38d03462e4298de9a17c5c411a4715e\"" Sep 10 04:52:52.105091 containerd[1503]: time="2025-09-10T04:52:52.105065278Z" level=info msg="connecting to shim 4faeaffb649b629a7c26cad7eecc8472e38d03462e4298de9a17c5c411a4715e" address="unix:///run/containerd/s/54dc299561ebe177088480a358faeefc0e96623a8f7cc850e23bd73bcdad51e9" protocol=ttrpc version=3 Sep 10 04:52:52.127182 systemd[1]: Started cri-containerd-4faeaffb649b629a7c26cad7eecc8472e38d03462e4298de9a17c5c411a4715e.scope - libcontainer container 4faeaffb649b629a7c26cad7eecc8472e38d03462e4298de9a17c5c411a4715e. 
Sep 10 04:52:52.190627 containerd[1503]: time="2025-09-10T04:52:52.190398802Z" level=info msg="StartContainer for \"4faeaffb649b629a7c26cad7eecc8472e38d03462e4298de9a17c5c411a4715e\" returns successfully" Sep 10 04:52:52.770565 systemd[1]: cri-containerd-4faeaffb649b629a7c26cad7eecc8472e38d03462e4298de9a17c5c411a4715e.scope: Deactivated successfully. Sep 10 04:52:52.771307 systemd[1]: cri-containerd-4faeaffb649b629a7c26cad7eecc8472e38d03462e4298de9a17c5c411a4715e.scope: Consumed 448ms CPU time, 172.8M memory peak, 2M read from disk, 165.8M written to disk. Sep 10 04:52:52.780174 containerd[1503]: time="2025-09-10T04:52:52.780128431Z" level=info msg="received exit event container_id:\"4faeaffb649b629a7c26cad7eecc8472e38d03462e4298de9a17c5c411a4715e\" id:\"4faeaffb649b629a7c26cad7eecc8472e38d03462e4298de9a17c5c411a4715e\" pid:3439 exited_at:{seconds:1757479972 nanos:779888145}" Sep 10 04:52:52.780383 containerd[1503]: time="2025-09-10T04:52:52.780332796Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4faeaffb649b629a7c26cad7eecc8472e38d03462e4298de9a17c5c411a4715e\" id:\"4faeaffb649b629a7c26cad7eecc8472e38d03462e4298de9a17c5c411a4715e\" pid:3439 exited_at:{seconds:1757479972 nanos:779888145}" Sep 10 04:52:52.798125 kubelet[2663]: I0910 04:52:52.797704 2663 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 04:52:52.801965 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4faeaffb649b629a7c26cad7eecc8472e38d03462e4298de9a17c5c411a4715e-rootfs.mount: Deactivated successfully. Sep 10 04:52:52.897422 systemd[1]: Created slice kubepods-besteffort-pod1fb5c643_f892_4c08_9108_c5fec131485f.slice - libcontainer container kubepods-besteffort-pod1fb5c643_f892_4c08_9108_c5fec131485f.slice. Sep 10 04:52:52.906783 systemd[1]: Created slice kubepods-burstable-podfabeda3a_c65d_479a_a08f_68ff0fb2fbf3.slice - libcontainer container kubepods-burstable-podfabeda3a_c65d_479a_a08f_68ff0fb2fbf3.slice. 
Sep 10 04:52:52.915180 systemd[1]: Created slice kubepods-besteffort-pod0f6048bb_41e4_4c15_be39_3709646ca95d.slice - libcontainer container kubepods-besteffort-pod0f6048bb_41e4_4c15_be39_3709646ca95d.slice. Sep 10 04:52:52.918737 systemd[1]: Created slice kubepods-burstable-pod5133c5ff_97e8_40b7_b660_6b80eb06137d.slice - libcontainer container kubepods-burstable-pod5133c5ff_97e8_40b7_b660_6b80eb06137d.slice. Sep 10 04:52:52.922590 systemd[1]: Created slice kubepods-besteffort-podf6480a61_3bae_400b_9de6_f3b9a00f8b32.slice - libcontainer container kubepods-besteffort-podf6480a61_3bae_400b_9de6_f3b9a00f8b32.slice. Sep 10 04:52:52.931345 systemd[1]: Created slice kubepods-besteffort-podee94e84a_7be1_485c_b93b_ebd59544d3c9.slice - libcontainer container kubepods-besteffort-podee94e84a_7be1_485c_b93b_ebd59544d3c9.slice. Sep 10 04:52:52.934204 systemd[1]: Created slice kubepods-besteffort-pode3044a66_3d8c_451c_a79d_03a4a1f14800.slice - libcontainer container kubepods-besteffort-pode3044a66_3d8c_451c_a79d_03a4a1f14800.slice. 
Sep 10 04:52:52.972745 kubelet[2663]: I0910 04:52:52.972697 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv6mw\" (UniqueName: \"kubernetes.io/projected/ee94e84a-7be1-485c-b93b-ebd59544d3c9-kube-api-access-qv6mw\") pod \"calico-kube-controllers-7f747d4549-pwpt8\" (UID: \"ee94e84a-7be1-485c-b93b-ebd59544d3c9\") " pod="calico-system/calico-kube-controllers-7f747d4549-pwpt8" Sep 10 04:52:52.972745 kubelet[2663]: I0910 04:52:52.972745 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24vlc\" (UniqueName: \"kubernetes.io/projected/f6480a61-3bae-400b-9de6-f3b9a00f8b32-kube-api-access-24vlc\") pod \"whisker-65b9f559d4-b8fbm\" (UID: \"f6480a61-3bae-400b-9de6-f3b9a00f8b32\") " pod="calico-system/whisker-65b9f559d4-b8fbm" Sep 10 04:52:52.972916 kubelet[2663]: I0910 04:52:52.972764 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f6048bb-41e4-4c15-be39-3709646ca95d-config\") pod \"goldmane-54d579b49d-fhkl2\" (UID: \"0f6048bb-41e4-4c15-be39-3709646ca95d\") " pod="calico-system/goldmane-54d579b49d-fhkl2" Sep 10 04:52:52.972916 kubelet[2663]: I0910 04:52:52.972784 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f6048bb-41e4-4c15-be39-3709646ca95d-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-fhkl2\" (UID: \"0f6048bb-41e4-4c15-be39-3709646ca95d\") " pod="calico-system/goldmane-54d579b49d-fhkl2" Sep 10 04:52:52.972916 kubelet[2663]: I0910 04:52:52.972853 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5133c5ff-97e8-40b7-b660-6b80eb06137d-config-volume\") pod \"coredns-674b8bbfcf-vhhp5\" (UID: 
\"5133c5ff-97e8-40b7-b660-6b80eb06137d\") " pod="kube-system/coredns-674b8bbfcf-vhhp5" Sep 10 04:52:52.972916 kubelet[2663]: I0910 04:52:52.972898 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6480a61-3bae-400b-9de6-f3b9a00f8b32-whisker-ca-bundle\") pod \"whisker-65b9f559d4-b8fbm\" (UID: \"f6480a61-3bae-400b-9de6-f3b9a00f8b32\") " pod="calico-system/whisker-65b9f559d4-b8fbm" Sep 10 04:52:52.973027 kubelet[2663]: I0910 04:52:52.972924 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1fb5c643-f892-4c08-9108-c5fec131485f-calico-apiserver-certs\") pod \"calico-apiserver-5ddb8f475b-4t2gd\" (UID: \"1fb5c643-f892-4c08-9108-c5fec131485f\") " pod="calico-apiserver/calico-apiserver-5ddb8f475b-4t2gd" Sep 10 04:52:52.973027 kubelet[2663]: I0910 04:52:52.972956 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/0f6048bb-41e4-4c15-be39-3709646ca95d-goldmane-key-pair\") pod \"goldmane-54d579b49d-fhkl2\" (UID: \"0f6048bb-41e4-4c15-be39-3709646ca95d\") " pod="calico-system/goldmane-54d579b49d-fhkl2" Sep 10 04:52:52.973027 kubelet[2663]: I0910 04:52:52.972972 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s9ws\" (UniqueName: \"kubernetes.io/projected/0f6048bb-41e4-4c15-be39-3709646ca95d-kube-api-access-4s9ws\") pod \"goldmane-54d579b49d-fhkl2\" (UID: \"0f6048bb-41e4-4c15-be39-3709646ca95d\") " pod="calico-system/goldmane-54d579b49d-fhkl2" Sep 10 04:52:52.973027 kubelet[2663]: I0910 04:52:52.972993 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/e3044a66-3d8c-451c-a79d-03a4a1f14800-calico-apiserver-certs\") pod \"calico-apiserver-5ddb8f475b-qmv9d\" (UID: \"e3044a66-3d8c-451c-a79d-03a4a1f14800\") " pod="calico-apiserver/calico-apiserver-5ddb8f475b-qmv9d" Sep 10 04:52:52.973027 kubelet[2663]: I0910 04:52:52.973009 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pspk\" (UniqueName: \"kubernetes.io/projected/fabeda3a-c65d-479a-a08f-68ff0fb2fbf3-kube-api-access-5pspk\") pod \"coredns-674b8bbfcf-c92kd\" (UID: \"fabeda3a-c65d-479a-a08f-68ff0fb2fbf3\") " pod="kube-system/coredns-674b8bbfcf-c92kd" Sep 10 04:52:52.973129 kubelet[2663]: I0910 04:52:52.973030 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slzn4\" (UniqueName: \"kubernetes.io/projected/1fb5c643-f892-4c08-9108-c5fec131485f-kube-api-access-slzn4\") pod \"calico-apiserver-5ddb8f475b-4t2gd\" (UID: \"1fb5c643-f892-4c08-9108-c5fec131485f\") " pod="calico-apiserver/calico-apiserver-5ddb8f475b-4t2gd" Sep 10 04:52:52.973129 kubelet[2663]: I0910 04:52:52.973049 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxlf5\" (UniqueName: \"kubernetes.io/projected/e3044a66-3d8c-451c-a79d-03a4a1f14800-kube-api-access-gxlf5\") pod \"calico-apiserver-5ddb8f475b-qmv9d\" (UID: \"e3044a66-3d8c-451c-a79d-03a4a1f14800\") " pod="calico-apiserver/calico-apiserver-5ddb8f475b-qmv9d" Sep 10 04:52:52.973129 kubelet[2663]: I0910 04:52:52.973064 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fabeda3a-c65d-479a-a08f-68ff0fb2fbf3-config-volume\") pod \"coredns-674b8bbfcf-c92kd\" (UID: \"fabeda3a-c65d-479a-a08f-68ff0fb2fbf3\") " pod="kube-system/coredns-674b8bbfcf-c92kd" Sep 10 04:52:52.973188 kubelet[2663]: I0910 04:52:52.973117 2663 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ee94e84a-7be1-485c-b93b-ebd59544d3c9-tigera-ca-bundle\") pod \"calico-kube-controllers-7f747d4549-pwpt8\" (UID: \"ee94e84a-7be1-485c-b93b-ebd59544d3c9\") " pod="calico-system/calico-kube-controllers-7f747d4549-pwpt8" Sep 10 04:52:52.973188 kubelet[2663]: I0910 04:52:52.973152 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f6480a61-3bae-400b-9de6-f3b9a00f8b32-whisker-backend-key-pair\") pod \"whisker-65b9f559d4-b8fbm\" (UID: \"f6480a61-3bae-400b-9de6-f3b9a00f8b32\") " pod="calico-system/whisker-65b9f559d4-b8fbm" Sep 10 04:52:52.973229 kubelet[2663]: I0910 04:52:52.973186 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5m87t\" (UniqueName: \"kubernetes.io/projected/5133c5ff-97e8-40b7-b660-6b80eb06137d-kube-api-access-5m87t\") pod \"coredns-674b8bbfcf-vhhp5\" (UID: \"5133c5ff-97e8-40b7-b660-6b80eb06137d\") " pod="kube-system/coredns-674b8bbfcf-vhhp5" Sep 10 04:52:53.203182 containerd[1503]: time="2025-09-10T04:52:53.203068631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddb8f475b-4t2gd,Uid:1fb5c643-f892-4c08-9108-c5fec131485f,Namespace:calico-apiserver,Attempt:0,}" Sep 10 04:52:53.209634 containerd[1503]: time="2025-09-10T04:52:53.209597270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c92kd,Uid:fabeda3a-c65d-479a-a08f-68ff0fb2fbf3,Namespace:kube-system,Attempt:0,}" Sep 10 04:52:53.221328 containerd[1503]: time="2025-09-10T04:52:53.221261874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-fhkl2,Uid:0f6048bb-41e4-4c15-be39-3709646ca95d,Namespace:calico-system,Attempt:0,}" Sep 10 04:52:53.222887 containerd[1503]: 
time="2025-09-10T04:52:53.222850512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vhhp5,Uid:5133c5ff-97e8-40b7-b660-6b80eb06137d,Namespace:kube-system,Attempt:0,}" Sep 10 04:52:53.232033 containerd[1503]: time="2025-09-10T04:52:53.231993255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65b9f559d4-b8fbm,Uid:f6480a61-3bae-400b-9de6-f3b9a00f8b32,Namespace:calico-system,Attempt:0,}" Sep 10 04:52:53.239048 containerd[1503]: time="2025-09-10T04:52:53.238662497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f747d4549-pwpt8,Uid:ee94e84a-7be1-485c-b93b-ebd59544d3c9,Namespace:calico-system,Attempt:0,}" Sep 10 04:52:53.239945 containerd[1503]: time="2025-09-10T04:52:53.239866287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddb8f475b-qmv9d,Uid:e3044a66-3d8c-451c-a79d-03a4a1f14800,Namespace:calico-apiserver,Attempt:0,}" Sep 10 04:52:53.335307 containerd[1503]: time="2025-09-10T04:52:53.335236529Z" level=error msg="Failed to destroy network for sandbox \"29f12e4889839716609b7c5824d96f71072d713fe7f87e87ae81c6b36fe7c8c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.335710 containerd[1503]: time="2025-09-10T04:52:53.335678500Z" level=error msg="Failed to destroy network for sandbox \"366530e8f69f51755ae72f9f383aa728dbbc232324edfd4450f3910d635fec8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.341555 containerd[1503]: time="2025-09-10T04:52:53.341331718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vhhp5,Uid:5133c5ff-97e8-40b7-b660-6b80eb06137d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"29f12e4889839716609b7c5824d96f71072d713fe7f87e87ae81c6b36fe7c8c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.344668 containerd[1503]: time="2025-09-10T04:52:53.344614958Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddb8f475b-4t2gd,Uid:1fb5c643-f892-4c08-9108-c5fec131485f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"366530e8f69f51755ae72f9f383aa728dbbc232324edfd4450f3910d635fec8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.347963 kubelet[2663]: E0910 04:52:53.346372 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"366530e8f69f51755ae72f9f383aa728dbbc232324edfd4450f3910d635fec8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.347963 kubelet[2663]: E0910 04:52:53.346445 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"366530e8f69f51755ae72f9f383aa728dbbc232324edfd4450f3910d635fec8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddb8f475b-4t2gd" Sep 10 04:52:53.347963 kubelet[2663]: E0910 04:52:53.346467 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"366530e8f69f51755ae72f9f383aa728dbbc232324edfd4450f3910d635fec8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddb8f475b-4t2gd" Sep 10 04:52:53.348121 kubelet[2663]: E0910 04:52:53.346520 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ddb8f475b-4t2gd_calico-apiserver(1fb5c643-f892-4c08-9108-c5fec131485f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ddb8f475b-4t2gd_calico-apiserver(1fb5c643-f892-4c08-9108-c5fec131485f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"366530e8f69f51755ae72f9f383aa728dbbc232324edfd4450f3910d635fec8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ddb8f475b-4t2gd" podUID="1fb5c643-f892-4c08-9108-c5fec131485f" Sep 10 04:52:53.348121 kubelet[2663]: E0910 04:52:53.347109 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29f12e4889839716609b7c5824d96f71072d713fe7f87e87ae81c6b36fe7c8c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.348121 kubelet[2663]: E0910 04:52:53.347142 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29f12e4889839716609b7c5824d96f71072d713fe7f87e87ae81c6b36fe7c8c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vhhp5" Sep 10 04:52:53.348217 kubelet[2663]: E0910 04:52:53.347158 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29f12e4889839716609b7c5824d96f71072d713fe7f87e87ae81c6b36fe7c8c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vhhp5" Sep 10 04:52:53.348217 kubelet[2663]: E0910 04:52:53.347186 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vhhp5_kube-system(5133c5ff-97e8-40b7-b660-6b80eb06137d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vhhp5_kube-system(5133c5ff-97e8-40b7-b660-6b80eb06137d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29f12e4889839716609b7c5824d96f71072d713fe7f87e87ae81c6b36fe7c8c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vhhp5" podUID="5133c5ff-97e8-40b7-b660-6b80eb06137d" Sep 10 04:52:53.360955 containerd[1503]: time="2025-09-10T04:52:53.360152056Z" level=error msg="Failed to destroy network for sandbox \"efac60080f3a25572c8e20beccc7f215337587b9ffb7cbaf19400d7ac2f41f1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.361079 containerd[1503]: time="2025-09-10T04:52:53.361021517Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-c92kd,Uid:fabeda3a-c65d-479a-a08f-68ff0fb2fbf3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"efac60080f3a25572c8e20beccc7f215337587b9ffb7cbaf19400d7ac2f41f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.361237 containerd[1503]: time="2025-09-10T04:52:53.361205522Z" level=error msg="Failed to destroy network for sandbox \"1e6d6cb7265ebeb850776bfcc333208450266294a57c09d9e0350bd75332731a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.361393 kubelet[2663]: E0910 04:52:53.361361 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efac60080f3a25572c8e20beccc7f215337587b9ffb7cbaf19400d7ac2f41f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.361447 kubelet[2663]: E0910 04:52:53.361408 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"efac60080f3a25572c8e20beccc7f215337587b9ffb7cbaf19400d7ac2f41f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-c92kd" Sep 10 04:52:53.361447 kubelet[2663]: E0910 04:52:53.361427 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"efac60080f3a25572c8e20beccc7f215337587b9ffb7cbaf19400d7ac2f41f1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-c92kd" Sep 10 04:52:53.361498 kubelet[2663]: E0910 04:52:53.361467 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-c92kd_kube-system(fabeda3a-c65d-479a-a08f-68ff0fb2fbf3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-c92kd_kube-system(fabeda3a-c65d-479a-a08f-68ff0fb2fbf3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"efac60080f3a25572c8e20beccc7f215337587b9ffb7cbaf19400d7ac2f41f1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-c92kd" podUID="fabeda3a-c65d-479a-a08f-68ff0fb2fbf3" Sep 10 04:52:53.362787 containerd[1503]: time="2025-09-10T04:52:53.362741559Z" level=error msg="Failed to destroy network for sandbox \"a6663b1f685e19313ed634dd7f61cfce4898e28482685cc43d54946d4efddf84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.363255 containerd[1503]: time="2025-09-10T04:52:53.363222171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f747d4549-pwpt8,Uid:ee94e84a-7be1-485c-b93b-ebd59544d3c9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e6d6cb7265ebeb850776bfcc333208450266294a57c09d9e0350bd75332731a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.363511 kubelet[2663]: E0910 04:52:53.363483 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e6d6cb7265ebeb850776bfcc333208450266294a57c09d9e0350bd75332731a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.363689 kubelet[2663]: E0910 04:52:53.363668 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e6d6cb7265ebeb850776bfcc333208450266294a57c09d9e0350bd75332731a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f747d4549-pwpt8" Sep 10 04:52:53.363763 kubelet[2663]: E0910 04:52:53.363747 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e6d6cb7265ebeb850776bfcc333208450266294a57c09d9e0350bd75332731a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f747d4549-pwpt8" Sep 10 04:52:53.363865 kubelet[2663]: E0910 04:52:53.363839 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f747d4549-pwpt8_calico-system(ee94e84a-7be1-485c-b93b-ebd59544d3c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f747d4549-pwpt8_calico-system(ee94e84a-7be1-485c-b93b-ebd59544d3c9)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"1e6d6cb7265ebeb850776bfcc333208450266294a57c09d9e0350bd75332731a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f747d4549-pwpt8" podUID="ee94e84a-7be1-485c-b93b-ebd59544d3c9" Sep 10 04:52:53.364052 containerd[1503]: time="2025-09-10T04:52:53.363991229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddb8f475b-qmv9d,Uid:e3044a66-3d8c-451c-a79d-03a4a1f14800,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6663b1f685e19313ed634dd7f61cfce4898e28482685cc43d54946d4efddf84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.364183 kubelet[2663]: E0910 04:52:53.364157 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6663b1f685e19313ed634dd7f61cfce4898e28482685cc43d54946d4efddf84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.364225 kubelet[2663]: E0910 04:52:53.364192 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6663b1f685e19313ed634dd7f61cfce4898e28482685cc43d54946d4efddf84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddb8f475b-qmv9d" Sep 10 04:52:53.364225 kubelet[2663]: E0910 04:52:53.364207 2663 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6663b1f685e19313ed634dd7f61cfce4898e28482685cc43d54946d4efddf84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5ddb8f475b-qmv9d" Sep 10 04:52:53.364269 kubelet[2663]: E0910 04:52:53.364238 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5ddb8f475b-qmv9d_calico-apiserver(e3044a66-3d8c-451c-a79d-03a4a1f14800)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5ddb8f475b-qmv9d_calico-apiserver(e3044a66-3d8c-451c-a79d-03a4a1f14800)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6663b1f685e19313ed634dd7f61cfce4898e28482685cc43d54946d4efddf84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5ddb8f475b-qmv9d" podUID="e3044a66-3d8c-451c-a79d-03a4a1f14800" Sep 10 04:52:53.368405 containerd[1503]: time="2025-09-10T04:52:53.367370792Z" level=error msg="Failed to destroy network for sandbox \"40f68d03fcfe218c5b30239d9574a4462eabdb2314e2d007129f3d2ddeb08490\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.368405 containerd[1503]: time="2025-09-10T04:52:53.368316815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-fhkl2,Uid:0f6048bb-41e4-4c15-be39-3709646ca95d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"40f68d03fcfe218c5b30239d9574a4462eabdb2314e2d007129f3d2ddeb08490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.368654 kubelet[2663]: E0910 04:52:53.368620 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40f68d03fcfe218c5b30239d9574a4462eabdb2314e2d007129f3d2ddeb08490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.368703 kubelet[2663]: E0910 04:52:53.368661 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40f68d03fcfe218c5b30239d9574a4462eabdb2314e2d007129f3d2ddeb08490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-fhkl2" Sep 10 04:52:53.368703 kubelet[2663]: E0910 04:52:53.368681 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40f68d03fcfe218c5b30239d9574a4462eabdb2314e2d007129f3d2ddeb08490\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-fhkl2" Sep 10 04:52:53.368755 kubelet[2663]: E0910 04:52:53.368712 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-fhkl2_calico-system(0f6048bb-41e4-4c15-be39-3709646ca95d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"goldmane-54d579b49d-fhkl2_calico-system(0f6048bb-41e4-4c15-be39-3709646ca95d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40f68d03fcfe218c5b30239d9574a4462eabdb2314e2d007129f3d2ddeb08490\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-fhkl2" podUID="0f6048bb-41e4-4c15-be39-3709646ca95d" Sep 10 04:52:53.370944 containerd[1503]: time="2025-09-10T04:52:53.370890917Z" level=error msg="Failed to destroy network for sandbox \"123dfb8dd0ced5c99afb50faa3696d84671dbefb2bac950fc4fde42b0eca59b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.371813 containerd[1503]: time="2025-09-10T04:52:53.371765619Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65b9f559d4-b8fbm,Uid:f6480a61-3bae-400b-9de6-f3b9a00f8b32,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"123dfb8dd0ced5c99afb50faa3696d84671dbefb2bac950fc4fde42b0eca59b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.372029 kubelet[2663]: E0910 04:52:53.372001 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"123dfb8dd0ced5c99afb50faa3696d84671dbefb2bac950fc4fde42b0eca59b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.372228 kubelet[2663]: E0910 04:52:53.372095 2663 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"123dfb8dd0ced5c99afb50faa3696d84671dbefb2bac950fc4fde42b0eca59b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65b9f559d4-b8fbm" Sep 10 04:52:53.372228 kubelet[2663]: E0910 04:52:53.372114 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"123dfb8dd0ced5c99afb50faa3696d84671dbefb2bac950fc4fde42b0eca59b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65b9f559d4-b8fbm" Sep 10 04:52:53.372228 kubelet[2663]: E0910 04:52:53.372157 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65b9f559d4-b8fbm_calico-system(f6480a61-3bae-400b-9de6-f3b9a00f8b32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65b9f559d4-b8fbm_calico-system(f6480a61-3bae-400b-9de6-f3b9a00f8b32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"123dfb8dd0ced5c99afb50faa3696d84671dbefb2bac950fc4fde42b0eca59b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65b9f559d4-b8fbm" podUID="f6480a61-3bae-400b-9de6-f3b9a00f8b32" Sep 10 04:52:53.575617 systemd[1]: Created slice kubepods-besteffort-podf6ad6904_9c3f_4923_ac04_3f47e122a4d0.slice - libcontainer container kubepods-besteffort-podf6ad6904_9c3f_4923_ac04_3f47e122a4d0.slice. 
Sep 10 04:52:53.577905 containerd[1503]: time="2025-09-10T04:52:53.577871238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbzqr,Uid:f6ad6904-9c3f-4923-ac04-3f47e122a4d0,Namespace:calico-system,Attempt:0,}" Sep 10 04:52:53.617870 containerd[1503]: time="2025-09-10T04:52:53.617827331Z" level=error msg="Failed to destroy network for sandbox \"78e0c11dffc1b44b042d0e13fb2b4a6ae80fd31d2016991a14a35372bd633ce1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.618734 containerd[1503]: time="2025-09-10T04:52:53.618701712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbzqr,Uid:f6ad6904-9c3f-4923-ac04-3f47e122a4d0,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78e0c11dffc1b44b042d0e13fb2b4a6ae80fd31d2016991a14a35372bd633ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.619009 kubelet[2663]: E0910 04:52:53.618975 2663 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78e0c11dffc1b44b042d0e13fb2b4a6ae80fd31d2016991a14a35372bd633ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 04:52:53.619055 kubelet[2663]: E0910 04:52:53.619033 2663 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78e0c11dffc1b44b042d0e13fb2b4a6ae80fd31d2016991a14a35372bd633ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cbzqr" Sep 10 04:52:53.619091 kubelet[2663]: E0910 04:52:53.619057 2663 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78e0c11dffc1b44b042d0e13fb2b4a6ae80fd31d2016991a14a35372bd633ce1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cbzqr" Sep 10 04:52:53.619132 kubelet[2663]: E0910 04:52:53.619106 2663 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cbzqr_calico-system(f6ad6904-9c3f-4923-ac04-3f47e122a4d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-cbzqr_calico-system(f6ad6904-9c3f-4923-ac04-3f47e122a4d0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78e0c11dffc1b44b042d0e13fb2b4a6ae80fd31d2016991a14a35372bd633ce1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cbzqr" podUID="f6ad6904-9c3f-4923-ac04-3f47e122a4d0" Sep 10 04:52:53.669197 containerd[1503]: time="2025-09-10T04:52:53.669124260Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 04:52:54.096844 systemd[1]: run-netns-cni\x2dd9244f0c\x2d0b2e\x2d9061\x2d3b3d\x2de5ac6b499df2.mount: Deactivated successfully. Sep 10 04:52:54.096971 systemd[1]: run-netns-cni\x2d9e8c61c7\x2dfb71\x2d6686\x2d7d1c\x2d8afcc134973a.mount: Deactivated successfully. Sep 10 04:52:54.097023 systemd[1]: run-netns-cni\x2d781fe5d5\x2d82a6\x2d931e\x2d6f1b\x2de678bbbc3115.mount: Deactivated successfully. 
Sep 10 04:52:54.097068 systemd[1]: run-netns-cni\x2d5beccf3c\x2d9462\x2d8642\x2d6f4e\x2dbab550da0243.mount: Deactivated successfully. Sep 10 04:52:57.459334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2313533043.mount: Deactivated successfully. Sep 10 04:52:57.719031 containerd[1503]: time="2025-09-10T04:52:57.718603129Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 10 04:52:57.730894 containerd[1503]: time="2025-09-10T04:52:57.730847586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:57.732410 containerd[1503]: time="2025-09-10T04:52:57.732310496Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:57.734962 containerd[1503]: time="2025-09-10T04:52:57.731430238Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.062261457s" Sep 10 04:52:57.734962 containerd[1503]: time="2025-09-10T04:52:57.734816349Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 10 04:52:57.736767 containerd[1503]: time="2025-09-10T04:52:57.736720749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:52:57.756079 containerd[1503]: time="2025-09-10T04:52:57.756033673Z" level=info 
msg="CreateContainer within sandbox \"9b0fd4fc47e782cbbfd8a9a64953e02f3d165fcb49a5e29e4dcf54f32db2f6ed\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 04:52:57.765961 containerd[1503]: time="2025-09-10T04:52:57.764686094Z" level=info msg="Container c34b1e666a2572594561184f8f494bc4b389c766fcdcf8c776febaa3ff85ec39: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:52:57.776388 containerd[1503]: time="2025-09-10T04:52:57.776322978Z" level=info msg="CreateContainer within sandbox \"9b0fd4fc47e782cbbfd8a9a64953e02f3d165fcb49a5e29e4dcf54f32db2f6ed\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c34b1e666a2572594561184f8f494bc4b389c766fcdcf8c776febaa3ff85ec39\"" Sep 10 04:52:57.777186 containerd[1503]: time="2025-09-10T04:52:57.776872910Z" level=info msg="StartContainer for \"c34b1e666a2572594561184f8f494bc4b389c766fcdcf8c776febaa3ff85ec39\"" Sep 10 04:52:57.778533 containerd[1503]: time="2025-09-10T04:52:57.778498184Z" level=info msg="connecting to shim c34b1e666a2572594561184f8f494bc4b389c766fcdcf8c776febaa3ff85ec39" address="unix:///run/containerd/s/54dc299561ebe177088480a358faeefc0e96623a8f7cc850e23bd73bcdad51e9" protocol=ttrpc version=3 Sep 10 04:52:57.818196 systemd[1]: Started cri-containerd-c34b1e666a2572594561184f8f494bc4b389c766fcdcf8c776febaa3ff85ec39.scope - libcontainer container c34b1e666a2572594561184f8f494bc4b389c766fcdcf8c776febaa3ff85ec39. Sep 10 04:52:57.853268 containerd[1503]: time="2025-09-10T04:52:57.853222629Z" level=info msg="StartContainer for \"c34b1e666a2572594561184f8f494bc4b389c766fcdcf8c776febaa3ff85ec39\" returns successfully" Sep 10 04:52:57.975957 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 04:52:57.976065 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 10 04:52:58.210212 kubelet[2663]: I0910 04:52:58.210163 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f6480a61-3bae-400b-9de6-f3b9a00f8b32-whisker-backend-key-pair\") pod \"f6480a61-3bae-400b-9de6-f3b9a00f8b32\" (UID: \"f6480a61-3bae-400b-9de6-f3b9a00f8b32\") " Sep 10 04:52:58.210212 kubelet[2663]: I0910 04:52:58.210214 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24vlc\" (UniqueName: \"kubernetes.io/projected/f6480a61-3bae-400b-9de6-f3b9a00f8b32-kube-api-access-24vlc\") pod \"f6480a61-3bae-400b-9de6-f3b9a00f8b32\" (UID: \"f6480a61-3bae-400b-9de6-f3b9a00f8b32\") " Sep 10 04:52:58.210625 kubelet[2663]: I0910 04:52:58.210243 2663 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6480a61-3bae-400b-9de6-f3b9a00f8b32-whisker-ca-bundle\") pod \"f6480a61-3bae-400b-9de6-f3b9a00f8b32\" (UID: \"f6480a61-3bae-400b-9de6-f3b9a00f8b32\") " Sep 10 04:52:58.222564 kubelet[2663]: I0910 04:52:58.222517 2663 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6480a61-3bae-400b-9de6-f3b9a00f8b32-kube-api-access-24vlc" (OuterVolumeSpecName: "kube-api-access-24vlc") pod "f6480a61-3bae-400b-9de6-f3b9a00f8b32" (UID: "f6480a61-3bae-400b-9de6-f3b9a00f8b32"). InnerVolumeSpecName "kube-api-access-24vlc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 04:52:58.223348 kubelet[2663]: I0910 04:52:58.223295 2663 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6480a61-3bae-400b-9de6-f3b9a00f8b32-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f6480a61-3bae-400b-9de6-f3b9a00f8b32" (UID: "f6480a61-3bae-400b-9de6-f3b9a00f8b32"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 04:52:58.234889 kubelet[2663]: I0910 04:52:58.234850 2663 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6480a61-3bae-400b-9de6-f3b9a00f8b32-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f6480a61-3bae-400b-9de6-f3b9a00f8b32" (UID: "f6480a61-3bae-400b-9de6-f3b9a00f8b32"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 04:52:58.312203 kubelet[2663]: I0910 04:52:58.312153 2663 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f6480a61-3bae-400b-9de6-f3b9a00f8b32-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 04:52:58.312203 kubelet[2663]: I0910 04:52:58.312188 2663 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-24vlc\" (UniqueName: \"kubernetes.io/projected/f6480a61-3bae-400b-9de6-f3b9a00f8b32-kube-api-access-24vlc\") on node \"localhost\" DevicePath \"\"" Sep 10 04:52:58.312203 kubelet[2663]: I0910 04:52:58.312196 2663 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f6480a61-3bae-400b-9de6-f3b9a00f8b32-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 04:52:58.460141 systemd[1]: var-lib-kubelet-pods-f6480a61\x2d3bae\x2d400b\x2d9de6\x2df3b9a00f8b32-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d24vlc.mount: Deactivated successfully. Sep 10 04:52:58.460240 systemd[1]: var-lib-kubelet-pods-f6480a61\x2d3bae\x2d400b\x2d9de6\x2df3b9a00f8b32-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 10 04:52:58.576887 systemd[1]: Removed slice kubepods-besteffort-podf6480a61_3bae_400b_9de6_f3b9a00f8b32.slice - libcontainer container kubepods-besteffort-podf6480a61_3bae_400b_9de6_f3b9a00f8b32.slice. 
Sep 10 04:52:58.701548 kubelet[2663]: I0910 04:52:58.701427 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-z7mdk" podStartSLOduration=1.745288558 podStartE2EDuration="13.701410571s" podCreationTimestamp="2025-09-10 04:52:45 +0000 UTC" firstStartedPulling="2025-09-10 04:52:45.779547434 +0000 UTC m=+21.299226467" lastFinishedPulling="2025-09-10 04:52:57.735669487 +0000 UTC m=+33.255348480" observedRunningTime="2025-09-10 04:52:58.701170086 +0000 UTC m=+34.220849119" watchObservedRunningTime="2025-09-10 04:52:58.701410571 +0000 UTC m=+34.221089604" Sep 10 04:52:58.709817 kubelet[2663]: I0910 04:52:58.709752 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 04:52:58.770787 systemd[1]: Created slice kubepods-besteffort-podc1c34b2e_1ef3_4949_a411_21a2cad29d2d.slice - libcontainer container kubepods-besteffort-podc1c34b2e_1ef3_4949_a411_21a2cad29d2d.slice. Sep 10 04:52:58.820043 kubelet[2663]: I0910 04:52:58.819999 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c1c34b2e-1ef3-4949-a411-21a2cad29d2d-whisker-backend-key-pair\") pod \"whisker-7b94f9b555-cndsx\" (UID: \"c1c34b2e-1ef3-4949-a411-21a2cad29d2d\") " pod="calico-system/whisker-7b94f9b555-cndsx" Sep 10 04:52:58.820661 kubelet[2663]: I0910 04:52:58.820196 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g55pl\" (UniqueName: \"kubernetes.io/projected/c1c34b2e-1ef3-4949-a411-21a2cad29d2d-kube-api-access-g55pl\") pod \"whisker-7b94f9b555-cndsx\" (UID: \"c1c34b2e-1ef3-4949-a411-21a2cad29d2d\") " pod="calico-system/whisker-7b94f9b555-cndsx" Sep 10 04:52:58.820661 kubelet[2663]: I0910 04:52:58.820290 2663 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/c1c34b2e-1ef3-4949-a411-21a2cad29d2d-whisker-ca-bundle\") pod \"whisker-7b94f9b555-cndsx\" (UID: \"c1c34b2e-1ef3-4949-a411-21a2cad29d2d\") " pod="calico-system/whisker-7b94f9b555-cndsx" Sep 10 04:52:59.076503 containerd[1503]: time="2025-09-10T04:52:59.076189782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b94f9b555-cndsx,Uid:c1c34b2e-1ef3-4949-a411-21a2cad29d2d,Namespace:calico-system,Attempt:0,}" Sep 10 04:52:59.227011 systemd-networkd[1432]: califb043132784: Link UP Sep 10 04:52:59.227538 systemd-networkd[1432]: califb043132784: Gained carrier Sep 10 04:52:59.239879 containerd[1503]: 2025-09-10 04:52:59.098 [INFO][3817] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 04:52:59.239879 containerd[1503]: 2025-09-10 04:52:59.125 [INFO][3817] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7b94f9b555--cndsx-eth0 whisker-7b94f9b555- calico-system c1c34b2e-1ef3-4949-a411-21a2cad29d2d 904 0 2025-09-10 04:52:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b94f9b555 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7b94f9b555-cndsx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califb043132784 [] [] }} ContainerID="5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" Namespace="calico-system" Pod="whisker-7b94f9b555-cndsx" WorkloadEndpoint="localhost-k8s-whisker--7b94f9b555--cndsx-" Sep 10 04:52:59.239879 containerd[1503]: 2025-09-10 04:52:59.126 [INFO][3817] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" Namespace="calico-system" Pod="whisker-7b94f9b555-cndsx" WorkloadEndpoint="localhost-k8s-whisker--7b94f9b555--cndsx-eth0" Sep 10 04:52:59.239879 
containerd[1503]: 2025-09-10 04:52:59.185 [INFO][3832] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" HandleID="k8s-pod-network.5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" Workload="localhost-k8s-whisker--7b94f9b555--cndsx-eth0" Sep 10 04:52:59.240248 containerd[1503]: 2025-09-10 04:52:59.185 [INFO][3832] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" HandleID="k8s-pod-network.5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" Workload="localhost-k8s-whisker--7b94f9b555--cndsx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000519150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7b94f9b555-cndsx", "timestamp":"2025-09-10 04:52:59.185357596 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:52:59.240248 containerd[1503]: 2025-09-10 04:52:59.185 [INFO][3832] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 04:52:59.240248 containerd[1503]: 2025-09-10 04:52:59.185 [INFO][3832] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 04:52:59.240248 containerd[1503]: 2025-09-10 04:52:59.185 [INFO][3832] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:52:59.240248 containerd[1503]: 2025-09-10 04:52:59.195 [INFO][3832] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" host="localhost" Sep 10 04:52:59.240248 containerd[1503]: 2025-09-10 04:52:59.200 [INFO][3832] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:52:59.240248 containerd[1503]: 2025-09-10 04:52:59.204 [INFO][3832] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:52:59.240248 containerd[1503]: 2025-09-10 04:52:59.205 [INFO][3832] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:52:59.240248 containerd[1503]: 2025-09-10 04:52:59.207 [INFO][3832] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:52:59.240248 containerd[1503]: 2025-09-10 04:52:59.207 [INFO][3832] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" host="localhost" Sep 10 04:52:59.240547 containerd[1503]: 2025-09-10 04:52:59.209 [INFO][3832] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4 Sep 10 04:52:59.240547 containerd[1503]: 2025-09-10 04:52:59.213 [INFO][3832] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" host="localhost" Sep 10 04:52:59.240547 containerd[1503]: 2025-09-10 04:52:59.218 [INFO][3832] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" host="localhost" Sep 10 04:52:59.240547 containerd[1503]: 2025-09-10 04:52:59.218 [INFO][3832] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" host="localhost" Sep 10 04:52:59.240547 containerd[1503]: 2025-09-10 04:52:59.218 [INFO][3832] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 04:52:59.240547 containerd[1503]: 2025-09-10 04:52:59.218 [INFO][3832] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" HandleID="k8s-pod-network.5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" Workload="localhost-k8s-whisker--7b94f9b555--cndsx-eth0" Sep 10 04:52:59.240732 containerd[1503]: 2025-09-10 04:52:59.221 [INFO][3817] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" Namespace="calico-system" Pod="whisker-7b94f9b555-cndsx" WorkloadEndpoint="localhost-k8s-whisker--7b94f9b555--cndsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b94f9b555--cndsx-eth0", GenerateName:"whisker-7b94f9b555-", Namespace:"calico-system", SelfLink:"", UID:"c1c34b2e-1ef3-4949-a411-21a2cad29d2d", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b94f9b555", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7b94f9b555-cndsx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califb043132784", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:52:59.240732 containerd[1503]: 2025-09-10 04:52:59.221 [INFO][3817] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" Namespace="calico-system" Pod="whisker-7b94f9b555-cndsx" WorkloadEndpoint="localhost-k8s-whisker--7b94f9b555--cndsx-eth0" Sep 10 04:52:59.240834 containerd[1503]: 2025-09-10 04:52:59.221 [INFO][3817] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb043132784 ContainerID="5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" Namespace="calico-system" Pod="whisker-7b94f9b555-cndsx" WorkloadEndpoint="localhost-k8s-whisker--7b94f9b555--cndsx-eth0" Sep 10 04:52:59.240834 containerd[1503]: 2025-09-10 04:52:59.227 [INFO][3817] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" Namespace="calico-system" Pod="whisker-7b94f9b555-cndsx" WorkloadEndpoint="localhost-k8s-whisker--7b94f9b555--cndsx-eth0" Sep 10 04:52:59.240908 containerd[1503]: 2025-09-10 04:52:59.228 [INFO][3817] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" Namespace="calico-system" Pod="whisker-7b94f9b555-cndsx" 
WorkloadEndpoint="localhost-k8s-whisker--7b94f9b555--cndsx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b94f9b555--cndsx-eth0", GenerateName:"whisker-7b94f9b555-", Namespace:"calico-system", SelfLink:"", UID:"c1c34b2e-1ef3-4949-a411-21a2cad29d2d", ResourceVersion:"904", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b94f9b555", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4", Pod:"whisker-7b94f9b555-cndsx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califb043132784", MAC:"ba:a0:64:14:2a:d2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:52:59.241014 containerd[1503]: 2025-09-10 04:52:59.237 [INFO][3817] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" Namespace="calico-system" Pod="whisker-7b94f9b555-cndsx" WorkloadEndpoint="localhost-k8s-whisker--7b94f9b555--cndsx-eth0" Sep 10 04:52:59.295728 containerd[1503]: time="2025-09-10T04:52:59.295661432Z" level=info msg="connecting to shim 
5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4" address="unix:///run/containerd/s/ccbc6b379c40ea7ee7cc51eabd7b58ebbaba695ac9561da775bcdea2d20137fd" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:52:59.321018 systemd[1]: Started cri-containerd-5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4.scope - libcontainer container 5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4. Sep 10 04:52:59.364995 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:52:59.410689 containerd[1503]: time="2025-09-10T04:52:59.410648120Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b94f9b555-cndsx,Uid:c1c34b2e-1ef3-4949-a411-21a2cad29d2d,Namespace:calico-system,Attempt:0,} returns sandbox id \"5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4\"" Sep 10 04:52:59.426034 containerd[1503]: time="2025-09-10T04:52:59.425996300Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 04:52:59.689096 kubelet[2663]: I0910 04:52:59.688869 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 04:52:59.707870 systemd-networkd[1432]: vxlan.calico: Link UP Sep 10 04:52:59.707878 systemd-networkd[1432]: vxlan.calico: Gained carrier Sep 10 04:53:00.496920 containerd[1503]: time="2025-09-10T04:53:00.496877840Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:53:00.501186 containerd[1503]: time="2025-09-10T04:53:00.497617934Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 10 04:53:00.501247 containerd[1503]: time="2025-09-10T04:53:00.498485030Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Sep 10 04:53:00.501280 containerd[1503]: time="2025-09-10T04:53:00.501015798Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.074490288s" Sep 10 04:53:00.501311 containerd[1503]: time="2025-09-10T04:53:00.501279683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 10 04:53:00.501882 containerd[1503]: time="2025-09-10T04:53:00.501861414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:53:00.506245 containerd[1503]: time="2025-09-10T04:53:00.506219056Z" level=info msg="CreateContainer within sandbox \"5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 04:53:00.512533 containerd[1503]: time="2025-09-10T04:53:00.511778321Z" level=info msg="Container ed566973dfbe1c1e67ff6c6fbc99902f4aace4710bed33bb7691f3507e6b0b45: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:53:00.524131 containerd[1503]: time="2025-09-10T04:53:00.524099035Z" level=info msg="CreateContainer within sandbox \"5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ed566973dfbe1c1e67ff6c6fbc99902f4aace4710bed33bb7691f3507e6b0b45\"" Sep 10 04:53:00.539273 containerd[1503]: time="2025-09-10T04:53:00.539229681Z" level=info msg="StartContainer for \"ed566973dfbe1c1e67ff6c6fbc99902f4aace4710bed33bb7691f3507e6b0b45\"" Sep 10 
04:53:00.540301 containerd[1503]: time="2025-09-10T04:53:00.540276340Z" level=info msg="connecting to shim ed566973dfbe1c1e67ff6c6fbc99902f4aace4710bed33bb7691f3507e6b0b45" address="unix:///run/containerd/s/ccbc6b379c40ea7ee7cc51eabd7b58ebbaba695ac9561da775bcdea2d20137fd" protocol=ttrpc version=3 Sep 10 04:53:00.567119 systemd[1]: Started cri-containerd-ed566973dfbe1c1e67ff6c6fbc99902f4aace4710bed33bb7691f3507e6b0b45.scope - libcontainer container ed566973dfbe1c1e67ff6c6fbc99902f4aace4710bed33bb7691f3507e6b0b45. Sep 10 04:53:00.571953 kubelet[2663]: I0910 04:53:00.571902 2663 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6480a61-3bae-400b-9de6-f3b9a00f8b32" path="/var/lib/kubelet/pods/f6480a61-3bae-400b-9de6-f3b9a00f8b32/volumes" Sep 10 04:53:00.600787 containerd[1503]: time="2025-09-10T04:53:00.600753484Z" level=info msg="StartContainer for \"ed566973dfbe1c1e67ff6c6fbc99902f4aace4710bed33bb7691f3507e6b0b45\" returns successfully" Sep 10 04:53:00.601981 containerd[1503]: time="2025-09-10T04:53:00.601955947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 04:53:01.251068 systemd-networkd[1432]: califb043132784: Gained IPv6LL Sep 10 04:53:01.443072 systemd-networkd[1432]: vxlan.calico: Gained IPv6LL Sep 10 04:53:02.455338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount218158156.mount: Deactivated successfully. 
Sep 10 04:53:02.471071 containerd[1503]: time="2025-09-10T04:53:02.471027519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:53:02.471596 containerd[1503]: time="2025-09-10T04:53:02.471566169Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 10 04:53:02.472449 containerd[1503]: time="2025-09-10T04:53:02.472421144Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:53:02.474482 containerd[1503]: time="2025-09-10T04:53:02.474455380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:53:02.475064 containerd[1503]: time="2025-09-10T04:53:02.475035190Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.873048883s" Sep 10 04:53:02.475104 containerd[1503]: time="2025-09-10T04:53:02.475069071Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 10 04:53:02.479883 containerd[1503]: time="2025-09-10T04:53:02.479793795Z" level=info msg="CreateContainer within sandbox \"5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 04:53:02.488892 
containerd[1503]: time="2025-09-10T04:53:02.488178104Z" level=info msg="Container 77aeca94657b3169ccb5704f8a7fe2a6fa501cced00d364489990cd701e2560b: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:53:02.495676 containerd[1503]: time="2025-09-10T04:53:02.495636476Z" level=info msg="CreateContainer within sandbox \"5859a3aa7f7a0acbd91e5613d092f050edb75352dfd02ed65e09344c924fd3c4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"77aeca94657b3169ccb5704f8a7fe2a6fa501cced00d364489990cd701e2560b\"" Sep 10 04:53:02.496175 containerd[1503]: time="2025-09-10T04:53:02.496150045Z" level=info msg="StartContainer for \"77aeca94657b3169ccb5704f8a7fe2a6fa501cced00d364489990cd701e2560b\"" Sep 10 04:53:02.497423 containerd[1503]: time="2025-09-10T04:53:02.497382547Z" level=info msg="connecting to shim 77aeca94657b3169ccb5704f8a7fe2a6fa501cced00d364489990cd701e2560b" address="unix:///run/containerd/s/ccbc6b379c40ea7ee7cc51eabd7b58ebbaba695ac9561da775bcdea2d20137fd" protocol=ttrpc version=3 Sep 10 04:53:02.526130 systemd[1]: Started cri-containerd-77aeca94657b3169ccb5704f8a7fe2a6fa501cced00d364489990cd701e2560b.scope - libcontainer container 77aeca94657b3169ccb5704f8a7fe2a6fa501cced00d364489990cd701e2560b. 
Sep 10 04:53:02.599838 containerd[1503]: time="2025-09-10T04:53:02.599803926Z" level=info msg="StartContainer for \"77aeca94657b3169ccb5704f8a7fe2a6fa501cced00d364489990cd701e2560b\" returns successfully" Sep 10 04:53:04.577789 containerd[1503]: time="2025-09-10T04:53:04.577739346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddb8f475b-qmv9d,Uid:e3044a66-3d8c-451c-a79d-03a4a1f14800,Namespace:calico-apiserver,Attempt:0,}" Sep 10 04:53:04.693342 systemd-networkd[1432]: cali36b286d5a80: Link UP Sep 10 04:53:04.693646 systemd-networkd[1432]: cali36b286d5a80: Gained carrier Sep 10 04:53:04.708598 kubelet[2663]: I0910 04:53:04.708406 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7b94f9b555-cndsx" podStartSLOduration=3.654358266 podStartE2EDuration="6.708337453s" podCreationTimestamp="2025-09-10 04:52:58 +0000 UTC" firstStartedPulling="2025-09-10 04:52:59.421677095 +0000 UTC m=+34.941356128" lastFinishedPulling="2025-09-10 04:53:02.475656282 +0000 UTC m=+37.995335315" observedRunningTime="2025-09-10 04:53:02.724076653 +0000 UTC m=+38.243755686" watchObservedRunningTime="2025-09-10 04:53:04.708337453 +0000 UTC m=+40.228016486" Sep 10 04:53:04.710682 containerd[1503]: 2025-09-10 04:53:04.629 [INFO][4177] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5ddb8f475b--qmv9d-eth0 calico-apiserver-5ddb8f475b- calico-apiserver e3044a66-3d8c-451c-a79d-03a4a1f14800 839 0 2025-09-10 04:52:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ddb8f475b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5ddb8f475b-qmv9d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali36b286d5a80 [] 
[] }} ContainerID="2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-qmv9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--qmv9d-" Sep 10 04:53:04.710682 containerd[1503]: 2025-09-10 04:53:04.629 [INFO][4177] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-qmv9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--qmv9d-eth0" Sep 10 04:53:04.710682 containerd[1503]: 2025-09-10 04:53:04.651 [INFO][4191] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" HandleID="k8s-pod-network.2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" Workload="localhost-k8s-calico--apiserver--5ddb8f475b--qmv9d-eth0" Sep 10 04:53:04.711263 containerd[1503]: 2025-09-10 04:53:04.651 [INFO][4191] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" HandleID="k8s-pod-network.2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" Workload="localhost-k8s-calico--apiserver--5ddb8f475b--qmv9d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd670), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5ddb8f475b-qmv9d", "timestamp":"2025-09-10 04:53:04.651250737 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:53:04.711263 containerd[1503]: 2025-09-10 04:53:04.651 [INFO][4191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 10 04:53:04.711263 containerd[1503]: 2025-09-10 04:53:04.651 [INFO][4191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 04:53:04.711263 containerd[1503]: 2025-09-10 04:53:04.651 [INFO][4191] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:53:04.711263 containerd[1503]: 2025-09-10 04:53:04.662 [INFO][4191] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" host="localhost" Sep 10 04:53:04.711263 containerd[1503]: 2025-09-10 04:53:04.665 [INFO][4191] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:53:04.711263 containerd[1503]: 2025-09-10 04:53:04.672 [INFO][4191] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:53:04.711263 containerd[1503]: 2025-09-10 04:53:04.674 [INFO][4191] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:53:04.711263 containerd[1503]: 2025-09-10 04:53:04.676 [INFO][4191] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:53:04.711263 containerd[1503]: 2025-09-10 04:53:04.676 [INFO][4191] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" host="localhost" Sep 10 04:53:04.711834 containerd[1503]: 2025-09-10 04:53:04.677 [INFO][4191] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e Sep 10 04:53:04.711834 containerd[1503]: 2025-09-10 04:53:04.681 [INFO][4191] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" host="localhost" Sep 10 04:53:04.711834 containerd[1503]: 2025-09-10 04:53:04.687 [INFO][4191] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" host="localhost" Sep 10 04:53:04.711834 containerd[1503]: 2025-09-10 04:53:04.687 [INFO][4191] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" host="localhost" Sep 10 04:53:04.711834 containerd[1503]: 2025-09-10 04:53:04.687 [INFO][4191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 04:53:04.711834 containerd[1503]: 2025-09-10 04:53:04.687 [INFO][4191] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" HandleID="k8s-pod-network.2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" Workload="localhost-k8s-calico--apiserver--5ddb8f475b--qmv9d-eth0" Sep 10 04:53:04.712412 containerd[1503]: 2025-09-10 04:53:04.691 [INFO][4177] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-qmv9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--qmv9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ddb8f475b--qmv9d-eth0", GenerateName:"calico-apiserver-5ddb8f475b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3044a66-3d8c-451c-a79d-03a4a1f14800", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"5ddb8f475b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5ddb8f475b-qmv9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36b286d5a80", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:53:04.712497 containerd[1503]: 2025-09-10 04:53:04.691 [INFO][4177] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-qmv9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--qmv9d-eth0" Sep 10 04:53:04.712497 containerd[1503]: 2025-09-10 04:53:04.691 [INFO][4177] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36b286d5a80 ContainerID="2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-qmv9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--qmv9d-eth0" Sep 10 04:53:04.712497 containerd[1503]: 2025-09-10 04:53:04.694 [INFO][4177] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-qmv9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--qmv9d-eth0" Sep 10 04:53:04.712603 
containerd[1503]: 2025-09-10 04:53:04.695 [INFO][4177] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-qmv9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--qmv9d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ddb8f475b--qmv9d-eth0", GenerateName:"calico-apiserver-5ddb8f475b-", Namespace:"calico-apiserver", SelfLink:"", UID:"e3044a66-3d8c-451c-a79d-03a4a1f14800", ResourceVersion:"839", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddb8f475b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e", Pod:"calico-apiserver-5ddb8f475b-qmv9d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali36b286d5a80", MAC:"ee:60:80:94:db:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:53:04.712656 containerd[1503]: 2025-09-10 
04:53:04.708 [INFO][4177] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-qmv9d" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--qmv9d-eth0" Sep 10 04:53:04.734331 containerd[1503]: time="2025-09-10T04:53:04.734285608Z" level=info msg="connecting to shim 2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e" address="unix:///run/containerd/s/69335f9e0cdebbd5171c756a04f4ecce12dd92fb22fe7da4b0a4dda7fc60ca66" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:53:04.757066 systemd[1]: Started cri-containerd-2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e.scope - libcontainer container 2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e. Sep 10 04:53:04.768739 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:53:04.787881 containerd[1503]: time="2025-09-10T04:53:04.787779824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddb8f475b-qmv9d,Uid:e3044a66-3d8c-451c-a79d-03a4a1f14800,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e\"" Sep 10 04:53:04.789518 containerd[1503]: time="2025-09-10T04:53:04.789494732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 04:53:04.946116 systemd[1]: Started sshd@7-10.0.0.60:22-10.0.0.1:49754.service - OpenSSH per-connection server daemon (10.0.0.1:49754). Sep 10 04:53:05.008710 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 49754 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:53:05.009267 sshd-session[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:53:05.014012 systemd-logind[1476]: New session 8 of user core. 
Sep 10 04:53:05.022095 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 04:53:05.223762 sshd[4270]: Connection closed by 10.0.0.1 port 49754 Sep 10 04:53:05.224195 sshd-session[4267]: pam_unix(sshd:session): session closed for user core Sep 10 04:53:05.227753 systemd[1]: sshd@7-10.0.0.60:22-10.0.0.1:49754.service: Deactivated successfully. Sep 10 04:53:05.230470 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 04:53:05.231474 systemd-logind[1476]: Session 8 logged out. Waiting for processes to exit. Sep 10 04:53:05.232462 systemd-logind[1476]: Removed session 8. Sep 10 04:53:05.569356 containerd[1503]: time="2025-09-10T04:53:05.569248649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vhhp5,Uid:5133c5ff-97e8-40b7-b660-6b80eb06137d,Namespace:kube-system,Attempt:0,}" Sep 10 04:53:05.569356 containerd[1503]: time="2025-09-10T04:53:05.569295250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f747d4549-pwpt8,Uid:ee94e84a-7be1-485c-b93b-ebd59544d3c9,Namespace:calico-system,Attempt:0,}" Sep 10 04:53:05.569745 containerd[1503]: time="2025-09-10T04:53:05.569482293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-fhkl2,Uid:0f6048bb-41e4-4c15-be39-3709646ca95d,Namespace:calico-system,Attempt:0,}" Sep 10 04:53:05.569745 containerd[1503]: time="2025-09-10T04:53:05.569541734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c92kd,Uid:fabeda3a-c65d-479a-a08f-68ff0fb2fbf3,Namespace:kube-system,Attempt:0,}" Sep 10 04:53:05.569994 containerd[1503]: time="2025-09-10T04:53:05.569970981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddb8f475b-4t2gd,Uid:1fb5c643-f892-4c08-9108-c5fec131485f,Namespace:calico-apiserver,Attempt:0,}" Sep 10 04:53:05.737961 systemd-networkd[1432]: calidb33aa77f4a: Link UP Sep 10 04:53:05.738147 systemd-networkd[1432]: calidb33aa77f4a: Gained carrier Sep 10 
04:53:05.757106 containerd[1503]: 2025-09-10 04:53:05.617 [INFO][4283] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7f747d4549--pwpt8-eth0 calico-kube-controllers-7f747d4549- calico-system ee94e84a-7be1-485c-b93b-ebd59544d3c9 837 0 2025-09-10 04:52:45 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f747d4549 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7f747d4549-pwpt8 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidb33aa77f4a [] [] }} ContainerID="b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" Namespace="calico-system" Pod="calico-kube-controllers-7f747d4549-pwpt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f747d4549--pwpt8-" Sep 10 04:53:05.757106 containerd[1503]: 2025-09-10 04:53:05.619 [INFO][4283] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" Namespace="calico-system" Pod="calico-kube-controllers-7f747d4549-pwpt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f747d4549--pwpt8-eth0" Sep 10 04:53:05.757106 containerd[1503]: 2025-09-10 04:53:05.673 [INFO][4352] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" HandleID="k8s-pod-network.b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" Workload="localhost-k8s-calico--kube--controllers--7f747d4549--pwpt8-eth0" Sep 10 04:53:05.757759 containerd[1503]: 2025-09-10 04:53:05.673 [INFO][4352] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" 
HandleID="k8s-pod-network.b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" Workload="localhost-k8s-calico--kube--controllers--7f747d4549--pwpt8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000435b60), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7f747d4549-pwpt8", "timestamp":"2025-09-10 04:53:05.673563508 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:53:05.757759 containerd[1503]: 2025-09-10 04:53:05.674 [INFO][4352] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 04:53:05.757759 containerd[1503]: 2025-09-10 04:53:05.674 [INFO][4352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 04:53:05.757759 containerd[1503]: 2025-09-10 04:53:05.674 [INFO][4352] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:53:05.757759 containerd[1503]: 2025-09-10 04:53:05.684 [INFO][4352] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" host="localhost" Sep 10 04:53:05.757759 containerd[1503]: 2025-09-10 04:53:05.691 [INFO][4352] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:53:05.757759 containerd[1503]: 2025-09-10 04:53:05.701 [INFO][4352] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:53:05.757759 containerd[1503]: 2025-09-10 04:53:05.706 [INFO][4352] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:53:05.757759 containerd[1503]: 2025-09-10 04:53:05.709 [INFO][4352] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:53:05.757759 
containerd[1503]: 2025-09-10 04:53:05.709 [INFO][4352] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" host="localhost" Sep 10 04:53:05.758064 containerd[1503]: 2025-09-10 04:53:05.713 [INFO][4352] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46 Sep 10 04:53:05.758064 containerd[1503]: 2025-09-10 04:53:05.722 [INFO][4352] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" host="localhost" Sep 10 04:53:05.758064 containerd[1503]: 2025-09-10 04:53:05.729 [INFO][4352] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" host="localhost" Sep 10 04:53:05.758064 containerd[1503]: 2025-09-10 04:53:05.729 [INFO][4352] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" host="localhost" Sep 10 04:53:05.758064 containerd[1503]: 2025-09-10 04:53:05.729 [INFO][4352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
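The IPAM sequence logged above — acquire the host-wide lock, confirm block affinity for 192.168.88.128/26, claim the next free address from the block, then release the lock — can be modeled with a short sketch. This is an illustrative toy (the `BlockIPAM` class and the handle string are invented for the example), not Calico's actual datastore-backed implementation:

```python
import ipaddress
import threading

class BlockIPAM:
    """Toy model of per-block IP assignment guarded by a host-wide lock.

    Illustrative only: real Calico IPAM persists blocks in the datastore
    and confirms host affinity before assigning.
    """
    def __init__(self, cidr):
        self.block = ipaddress.ip_network(cidr)
        self.assigned = set()
        self.lock = threading.Lock()  # stands in for the host-wide IPAM lock

    def auto_assign(self, handle):
        with self.lock:                    # "Acquired host-wide IPAM lock"
            for ip in self.block.hosts():  # walk the /26, skipping taken addrs
                if ip not in self.assigned:
                    self.assigned.add(ip)  # "Writing block in order to claim IPs"
                    return str(ip)
            raise RuntimeError("block exhausted")
        # lock released when the with-block exits

ipam = BlockIPAM("192.168.88.128/26")
# Seed the addresses already claimed earlier in the log (.129 and .130):
for taken in ("192.168.88.129", "192.168.88.130"):
    ipam.assigned.add(ipaddress.ip_address(taken))
print(ipam.auto_assign("example-handle"))  # next free address: 192.168.88.131
```

With .129 and .130 already claimed, the first free host address in the /26 is .131 — matching the `Successfully claimed IPs: [192.168.88.131/26]` entry above.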
Sep 10 04:53:05.758064 containerd[1503]: 2025-09-10 04:53:05.729 [INFO][4352] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" HandleID="k8s-pod-network.b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" Workload="localhost-k8s-calico--kube--controllers--7f747d4549--pwpt8-eth0" Sep 10 04:53:05.758252 containerd[1503]: 2025-09-10 04:53:05.734 [INFO][4283] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" Namespace="calico-system" Pod="calico-kube-controllers-7f747d4549-pwpt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f747d4549--pwpt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f747d4549--pwpt8-eth0", GenerateName:"calico-kube-controllers-7f747d4549-", Namespace:"calico-system", SelfLink:"", UID:"ee94e84a-7be1-485c-b93b-ebd59544d3c9", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f747d4549", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7f747d4549-pwpt8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidb33aa77f4a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:53:05.758329 containerd[1503]: 2025-09-10 04:53:05.734 [INFO][4283] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" Namespace="calico-system" Pod="calico-kube-controllers-7f747d4549-pwpt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f747d4549--pwpt8-eth0" Sep 10 04:53:05.758329 containerd[1503]: 2025-09-10 04:53:05.734 [INFO][4283] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidb33aa77f4a ContainerID="b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" Namespace="calico-system" Pod="calico-kube-controllers-7f747d4549-pwpt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f747d4549--pwpt8-eth0" Sep 10 04:53:05.758329 containerd[1503]: 2025-09-10 04:53:05.737 [INFO][4283] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" Namespace="calico-system" Pod="calico-kube-controllers-7f747d4549-pwpt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f747d4549--pwpt8-eth0" Sep 10 04:53:05.758711 containerd[1503]: 2025-09-10 04:53:05.739 [INFO][4283] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" Namespace="calico-system" Pod="calico-kube-controllers-7f747d4549-pwpt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f747d4549--pwpt8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7f747d4549--pwpt8-eth0", GenerateName:"calico-kube-controllers-7f747d4549-", Namespace:"calico-system", SelfLink:"", UID:"ee94e84a-7be1-485c-b93b-ebd59544d3c9", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f747d4549", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46", Pod:"calico-kube-controllers-7f747d4549-pwpt8", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidb33aa77f4a", MAC:"16:b5:d9:c1:ce:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:53:05.758774 containerd[1503]: 2025-09-10 04:53:05.750 [INFO][4283] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" Namespace="calico-system" Pod="calico-kube-controllers-7f747d4549-pwpt8" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7f747d4549--pwpt8-eth0" Sep 10 04:53:05.809878 containerd[1503]: time="2025-09-10T04:53:05.809735606Z" level=info msg="connecting to shim 
b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46" address="unix:///run/containerd/s/f803e49e3ecacc9948d45d1c44181cd21a9c1d73758961d1f7a85a2c49354c85" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:53:05.841121 systemd-networkd[1432]: cali679d3cf4676: Link UP Sep 10 04:53:05.841659 systemd-networkd[1432]: cali679d3cf4676: Gained carrier Sep 10 04:53:05.855271 systemd[1]: Started cri-containerd-b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46.scope - libcontainer container b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46. Sep 10 04:53:05.859609 containerd[1503]: 2025-09-10 04:53:05.647 [INFO][4321] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--5ddb8f475b--4t2gd-eth0 calico-apiserver-5ddb8f475b- calico-apiserver 1fb5c643-f892-4c08-9108-c5fec131485f 831 0 2025-09-10 04:52:40 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5ddb8f475b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-5ddb8f475b-4t2gd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali679d3cf4676 [] [] }} ContainerID="d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-4t2gd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--4t2gd-" Sep 10 04:53:05.859609 containerd[1503]: 2025-09-10 04:53:05.648 [INFO][4321] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-4t2gd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--4t2gd-eth0" Sep 10 04:53:05.859609 containerd[1503]: 2025-09-10 
04:53:05.696 [INFO][4364] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" HandleID="k8s-pod-network.d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" Workload="localhost-k8s-calico--apiserver--5ddb8f475b--4t2gd-eth0" Sep 10 04:53:05.859786 containerd[1503]: 2025-09-10 04:53:05.697 [INFO][4364] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" HandleID="k8s-pod-network.d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" Workload="localhost-k8s-calico--apiserver--5ddb8f475b--4t2gd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024a040), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-5ddb8f475b-4t2gd", "timestamp":"2025-09-10 04:53:05.696715765 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:53:05.859786 containerd[1503]: 2025-09-10 04:53:05.697 [INFO][4364] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 04:53:05.859786 containerd[1503]: 2025-09-10 04:53:05.729 [INFO][4364] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 04:53:05.859786 containerd[1503]: 2025-09-10 04:53:05.729 [INFO][4364] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:53:05.859786 containerd[1503]: 2025-09-10 04:53:05.786 [INFO][4364] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" host="localhost" Sep 10 04:53:05.859786 containerd[1503]: 2025-09-10 04:53:05.794 [INFO][4364] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:53:05.859786 containerd[1503]: 2025-09-10 04:53:05.804 [INFO][4364] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:53:05.859786 containerd[1503]: 2025-09-10 04:53:05.809 [INFO][4364] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:53:05.859786 containerd[1503]: 2025-09-10 04:53:05.814 [INFO][4364] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:53:05.859786 containerd[1503]: 2025-09-10 04:53:05.814 [INFO][4364] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" host="localhost" Sep 10 04:53:05.860953 containerd[1503]: 2025-09-10 04:53:05.816 [INFO][4364] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260 Sep 10 04:53:05.860953 containerd[1503]: 2025-09-10 04:53:05.819 [INFO][4364] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" host="localhost" Sep 10 04:53:05.860953 containerd[1503]: 2025-09-10 04:53:05.828 [INFO][4364] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" host="localhost" Sep 10 04:53:05.860953 containerd[1503]: 2025-09-10 04:53:05.828 [INFO][4364] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" host="localhost" Sep 10 04:53:05.860953 containerd[1503]: 2025-09-10 04:53:05.828 [INFO][4364] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 04:53:05.860953 containerd[1503]: 2025-09-10 04:53:05.828 [INFO][4364] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" HandleID="k8s-pod-network.d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" Workload="localhost-k8s-calico--apiserver--5ddb8f475b--4t2gd-eth0" Sep 10 04:53:05.861080 containerd[1503]: 2025-09-10 04:53:05.835 [INFO][4321] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-4t2gd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--4t2gd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ddb8f475b--4t2gd-eth0", GenerateName:"calico-apiserver-5ddb8f475b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1fb5c643-f892-4c08-9108-c5fec131485f", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddb8f475b", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-5ddb8f475b-4t2gd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali679d3cf4676", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:53:05.861134 containerd[1503]: 2025-09-10 04:53:05.835 [INFO][4321] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-4t2gd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--4t2gd-eth0" Sep 10 04:53:05.861134 containerd[1503]: 2025-09-10 04:53:05.835 [INFO][4321] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali679d3cf4676 ContainerID="d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-4t2gd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--4t2gd-eth0" Sep 10 04:53:05.861134 containerd[1503]: 2025-09-10 04:53:05.841 [INFO][4321] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-4t2gd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--4t2gd-eth0" Sep 10 04:53:05.861198 containerd[1503]: 2025-09-10 04:53:05.843 [INFO][4321] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-4t2gd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--4t2gd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--5ddb8f475b--4t2gd-eth0", GenerateName:"calico-apiserver-5ddb8f475b-", Namespace:"calico-apiserver", SelfLink:"", UID:"1fb5c643-f892-4c08-9108-c5fec131485f", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5ddb8f475b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260", Pod:"calico-apiserver-5ddb8f475b-4t2gd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali679d3cf4676", MAC:"3a:81:41:41:35:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:53:05.861244 containerd[1503]: 2025-09-10 04:53:05.856 [INFO][4321] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" Namespace="calico-apiserver" Pod="calico-apiserver-5ddb8f475b-4t2gd" WorkloadEndpoint="localhost-k8s-calico--apiserver--5ddb8f475b--4t2gd-eth0" Sep 10 04:53:05.868221 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:53:05.880487 containerd[1503]: time="2025-09-10T04:53:05.880429997Z" level=info msg="connecting to shim d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260" address="unix:///run/containerd/s/07b340013b800d825dd273d633d5769864eb42b46d766f66adbf05c71bc6b10f" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:53:05.898169 containerd[1503]: time="2025-09-10T04:53:05.898128685Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f747d4549-pwpt8,Uid:ee94e84a-7be1-485c-b93b-ebd59544d3c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46\"" Sep 10 04:53:05.930111 systemd-networkd[1432]: calibd18a897e64: Link UP Sep 10 04:53:05.930285 systemd-networkd[1432]: calibd18a897e64: Gained carrier Sep 10 04:53:05.948598 containerd[1503]: 2025-09-10 04:53:05.649 [INFO][4310] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--fhkl2-eth0 goldmane-54d579b49d- calico-system 0f6048bb-41e4-4c15-be39-3709646ca95d 838 0 2025-09-10 04:52:45 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-fhkl2 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calibd18a897e64 [] [] }} ContainerID="971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" Namespace="calico-system" 
Pod="goldmane-54d579b49d-fhkl2" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fhkl2-" Sep 10 04:53:05.948598 containerd[1503]: 2025-09-10 04:53:05.649 [INFO][4310] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" Namespace="calico-system" Pod="goldmane-54d579b49d-fhkl2" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fhkl2-eth0" Sep 10 04:53:05.948598 containerd[1503]: 2025-09-10 04:53:05.710 [INFO][4366] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" HandleID="k8s-pod-network.971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" Workload="localhost-k8s-goldmane--54d579b49d--fhkl2-eth0" Sep 10 04:53:05.948791 containerd[1503]: 2025-09-10 04:53:05.710 [INFO][4366] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" HandleID="k8s-pod-network.971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" Workload="localhost-k8s-goldmane--54d579b49d--fhkl2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400059ed20), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-fhkl2", "timestamp":"2025-09-10 04:53:05.710077863 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:53:05.948791 containerd[1503]: 2025-09-10 04:53:05.710 [INFO][4366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 04:53:05.948791 containerd[1503]: 2025-09-10 04:53:05.828 [INFO][4366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
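The interleaved `[4352]`/`[4364]`/`[4366]` entries show three concurrent CNI requests serializing on the host-wide IPAM lock: each logs "About to acquire" well before "Acquired" (e.g. `[4364]` at 04:53:05.697 vs. 05.729, the instant `[4352]` released), and each ends up with a distinct address from the same /26 block. A minimal sketch of that contention, with hypothetical pod names standing in for the real workloads:

```python
import ipaddress
import threading

block = ipaddress.ip_network("192.168.88.128/26")
assigned = {ipaddress.ip_address("192.168.88.129"),
            ipaddress.ip_address("192.168.88.130")}  # already claimed per the log
ipam_lock = threading.Lock()  # stands in for the host-wide IPAM lock
results = {}

def cni_request(pod):
    """One CNI ADD: serialize on the lock, claim the next free address."""
    with ipam_lock:
        ip = next(h for h in block.hosts() if h not in assigned)
        assigned.add(ip)
        results[pod] = str(ip)

# Three concurrent requests, like the [4352]/[4364]/[4366] handlers above
threads = [threading.Thread(target=cni_request, args=(p,))
           for p in ("kube-controllers-pod", "apiserver-pod", "goldmane-pod")]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(sorted(results.values()))  # the three pods get .131, .132, .133 in some order
```

Whichever thread wins the lock first gets the lowest free address; the lock guarantees no two requests claim the same IP, which is why the log shows .131, .132, and .133 handed out strictly one at a time.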
Sep 10 04:53:05.948791 containerd[1503]: 2025-09-10 04:53:05.828 [INFO][4366] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:53:05.948791 containerd[1503]: 2025-09-10 04:53:05.887 [INFO][4366] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" host="localhost" Sep 10 04:53:05.948791 containerd[1503]: 2025-09-10 04:53:05.896 [INFO][4366] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:53:05.948791 containerd[1503]: 2025-09-10 04:53:05.902 [INFO][4366] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:53:05.948791 containerd[1503]: 2025-09-10 04:53:05.906 [INFO][4366] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:53:05.948791 containerd[1503]: 2025-09-10 04:53:05.910 [INFO][4366] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:53:05.948791 containerd[1503]: 2025-09-10 04:53:05.910 [INFO][4366] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" host="localhost" Sep 10 04:53:05.949018 containerd[1503]: 2025-09-10 04:53:05.912 [INFO][4366] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc Sep 10 04:53:05.949018 containerd[1503]: 2025-09-10 04:53:05.916 [INFO][4366] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" host="localhost" Sep 10 04:53:05.949018 containerd[1503]: 2025-09-10 04:53:05.924 [INFO][4366] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" host="localhost" Sep 10 04:53:05.949018 containerd[1503]: 2025-09-10 04:53:05.924 [INFO][4366] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" host="localhost" Sep 10 04:53:05.949018 containerd[1503]: 2025-09-10 04:53:05.924 [INFO][4366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 04:53:05.949018 containerd[1503]: 2025-09-10 04:53:05.924 [INFO][4366] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" HandleID="k8s-pod-network.971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" Workload="localhost-k8s-goldmane--54d579b49d--fhkl2-eth0" Sep 10 04:53:05.949123 containerd[1503]: 2025-09-10 04:53:05.927 [INFO][4310] cni-plugin/k8s.go 418: Populated endpoint ContainerID="971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" Namespace="calico-system" Pod="goldmane-54d579b49d-fhkl2" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fhkl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--fhkl2-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"0f6048bb-41e4-4c15-be39-3709646ca95d", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-fhkl2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibd18a897e64", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:53:05.949123 containerd[1503]: 2025-09-10 04:53:05.927 [INFO][4310] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" Namespace="calico-system" Pod="goldmane-54d579b49d-fhkl2" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fhkl2-eth0" Sep 10 04:53:05.949192 containerd[1503]: 2025-09-10 04:53:05.927 [INFO][4310] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibd18a897e64 ContainerID="971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" Namespace="calico-system" Pod="goldmane-54d579b49d-fhkl2" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fhkl2-eth0" Sep 10 04:53:05.949192 containerd[1503]: 2025-09-10 04:53:05.929 [INFO][4310] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" Namespace="calico-system" Pod="goldmane-54d579b49d-fhkl2" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fhkl2-eth0" Sep 10 04:53:05.949230 containerd[1503]: 2025-09-10 04:53:05.929 [INFO][4310] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" Namespace="calico-system" Pod="goldmane-54d579b49d-fhkl2" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fhkl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--fhkl2-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"0f6048bb-41e4-4c15-be39-3709646ca95d", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc", Pod:"goldmane-54d579b49d-fhkl2", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calibd18a897e64", MAC:"16:4b:e0:76:c6:c7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:53:05.949275 containerd[1503]: 2025-09-10 04:53:05.943 [INFO][4310] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" Namespace="calico-system" Pod="goldmane-54d579b49d-fhkl2" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--fhkl2-eth0" Sep 10 04:53:05.955102 systemd[1]: Started 
cri-containerd-d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260.scope - libcontainer container d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260. Sep 10 04:53:05.970771 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:53:06.019788 kubelet[2663]: I0910 04:53:06.019248 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 04:53:06.031729 containerd[1503]: time="2025-09-10T04:53:06.031658607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5ddb8f475b-4t2gd,Uid:1fb5c643-f892-4c08-9108-c5fec131485f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260\"" Sep 10 04:53:06.044645 systemd-networkd[1432]: cali1432d66c7b8: Link UP Sep 10 04:53:06.045175 systemd-networkd[1432]: cali1432d66c7b8: Gained carrier Sep 10 04:53:06.058568 containerd[1503]: time="2025-09-10T04:53:06.058109386Z" level=info msg="connecting to shim 971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc" address="unix:///run/containerd/s/37b0d63ea15ad563f534fdbd1f616b3dfb86e53fb48d45f84280f01d5e13abc6" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:53:06.062325 containerd[1503]: 2025-09-10 04:53:05.647 [INFO][4298] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--vhhp5-eth0 coredns-674b8bbfcf- kube-system 5133c5ff-97e8-40b7-b660-6b80eb06137d 835 0 2025-09-10 04:52:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-vhhp5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1432d66c7b8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" Namespace="kube-system" Pod="coredns-674b8bbfcf-vhhp5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vhhp5-" Sep 10 04:53:06.062325 containerd[1503]: 2025-09-10 04:53:05.647 [INFO][4298] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" Namespace="kube-system" Pod="coredns-674b8bbfcf-vhhp5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vhhp5-eth0" Sep 10 04:53:06.062325 containerd[1503]: 2025-09-10 04:53:05.717 [INFO][4362] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" HandleID="k8s-pod-network.3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" Workload="localhost-k8s-coredns--674b8bbfcf--vhhp5-eth0" Sep 10 04:53:06.062566 containerd[1503]: 2025-09-10 04:53:05.717 [INFO][4362] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" HandleID="k8s-pod-network.3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" Workload="localhost-k8s-coredns--674b8bbfcf--vhhp5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121450), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-vhhp5", "timestamp":"2025-09-10 04:53:05.7172967 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:53:06.062566 containerd[1503]: 2025-09-10 04:53:05.717 [INFO][4362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 04:53:06.062566 containerd[1503]: 2025-09-10 04:53:05.924 [INFO][4362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 04:53:06.062566 containerd[1503]: 2025-09-10 04:53:05.925 [INFO][4362] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:53:06.062566 containerd[1503]: 2025-09-10 04:53:05.990 [INFO][4362] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" host="localhost" Sep 10 04:53:06.062566 containerd[1503]: 2025-09-10 04:53:05.998 [INFO][4362] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:53:06.062566 containerd[1503]: 2025-09-10 04:53:06.010 [INFO][4362] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:53:06.062566 containerd[1503]: 2025-09-10 04:53:06.014 [INFO][4362] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:53:06.062566 containerd[1503]: 2025-09-10 04:53:06.017 [INFO][4362] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:53:06.062566 containerd[1503]: 2025-09-10 04:53:06.017 [INFO][4362] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" host="localhost" Sep 10 04:53:06.062762 containerd[1503]: 2025-09-10 04:53:06.018 [INFO][4362] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4 Sep 10 04:53:06.062762 containerd[1503]: 2025-09-10 04:53:06.024 [INFO][4362] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" host="localhost" Sep 10 04:53:06.062762 containerd[1503]: 2025-09-10 04:53:06.036 [INFO][4362] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" host="localhost" Sep 10 04:53:06.062762 containerd[1503]: 2025-09-10 04:53:06.037 [INFO][4362] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" host="localhost" Sep 10 04:53:06.062762 containerd[1503]: 2025-09-10 04:53:06.037 [INFO][4362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 04:53:06.062762 containerd[1503]: 2025-09-10 04:53:06.037 [INFO][4362] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" HandleID="k8s-pod-network.3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" Workload="localhost-k8s-coredns--674b8bbfcf--vhhp5-eth0" Sep 10 04:53:06.062866 containerd[1503]: 2025-09-10 04:53:06.042 [INFO][4298] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" Namespace="kube-system" Pod="coredns-674b8bbfcf-vhhp5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vhhp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--vhhp5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5133c5ff-97e8-40b7-b660-6b80eb06137d", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-vhhp5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1432d66c7b8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:53:06.063192 containerd[1503]: 2025-09-10 04:53:06.042 [INFO][4298] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" Namespace="kube-system" Pod="coredns-674b8bbfcf-vhhp5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vhhp5-eth0" Sep 10 04:53:06.063192 containerd[1503]: 2025-09-10 04:53:06.042 [INFO][4298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1432d66c7b8 ContainerID="3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" Namespace="kube-system" Pod="coredns-674b8bbfcf-vhhp5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vhhp5-eth0" Sep 10 04:53:06.063192 containerd[1503]: 2025-09-10 04:53:06.045 [INFO][4298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" Namespace="kube-system" Pod="coredns-674b8bbfcf-vhhp5" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vhhp5-eth0" Sep 10 04:53:06.063263 containerd[1503]: 2025-09-10 04:53:06.048 [INFO][4298] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" Namespace="kube-system" Pod="coredns-674b8bbfcf-vhhp5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vhhp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--vhhp5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"5133c5ff-97e8-40b7-b660-6b80eb06137d", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4", Pod:"coredns-674b8bbfcf-vhhp5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1432d66c7b8", MAC:"c6:b4:d3:eb:1f:87", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:53:06.063263 containerd[1503]: 2025-09-10 04:53:06.059 [INFO][4298] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" Namespace="kube-system" Pod="coredns-674b8bbfcf-vhhp5" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--vhhp5-eth0" Sep 10 04:53:06.089237 systemd[1]: Started cri-containerd-971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc.scope - libcontainer container 971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc. Sep 10 04:53:06.100845 containerd[1503]: time="2025-09-10T04:53:06.100730382Z" level=info msg="connecting to shim 3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4" address="unix:///run/containerd/s/70044edfbea674bcaf0250aff29a492336c02d92c5fb1a9d5e98ff6a6e5e84ec" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:53:06.111099 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:53:06.161733 systemd-networkd[1432]: cali22f89df4e5d: Link UP Sep 10 04:53:06.162401 systemd-networkd[1432]: cali22f89df4e5d: Gained carrier Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:05.664 [INFO][4317] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--c92kd-eth0 coredns-674b8bbfcf- kube-system fabeda3a-c65d-479a-a08f-68ff0fb2fbf3 834 0 2025-09-10 04:52:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost 
coredns-674b8bbfcf-c92kd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali22f89df4e5d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" Namespace="kube-system" Pod="coredns-674b8bbfcf-c92kd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c92kd-" Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:05.664 [INFO][4317] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" Namespace="kube-system" Pod="coredns-674b8bbfcf-c92kd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c92kd-eth0" Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:05.740 [INFO][4379] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" HandleID="k8s-pod-network.37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" Workload="localhost-k8s-coredns--674b8bbfcf--c92kd-eth0" Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:05.742 [INFO][4379] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" HandleID="k8s-pod-network.37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" Workload="localhost-k8s-coredns--674b8bbfcf--c92kd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005119c0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-c92kd", "timestamp":"2025-09-10 04:53:05.738391444 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:05.742 [INFO][4379] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:06.038 [INFO][4379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:06.038 [INFO][4379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:06.089 [INFO][4379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" host="localhost" Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:06.099 [INFO][4379] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:06.111 [INFO][4379] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:06.120 [INFO][4379] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:06.127 [INFO][4379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:06.127 [INFO][4379] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" host="localhost" Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:06.129 [INFO][4379] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53 Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:06.140 [INFO][4379] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" host="localhost" Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 
04:53:06.149 [INFO][4379] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" host="localhost" Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:06.149 [INFO][4379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" host="localhost" Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:06.150 [INFO][4379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 04:53:06.185861 containerd[1503]: 2025-09-10 04:53:06.150 [INFO][4379] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" HandleID="k8s-pod-network.37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" Workload="localhost-k8s-coredns--674b8bbfcf--c92kd-eth0" Sep 10 04:53:06.186456 containerd[1503]: 2025-09-10 04:53:06.156 [INFO][4317] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" Namespace="kube-system" Pod="coredns-674b8bbfcf-c92kd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c92kd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--c92kd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fabeda3a-c65d-479a-a08f-68ff0fb2fbf3", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-c92kd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22f89df4e5d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:53:06.186456 containerd[1503]: 2025-09-10 04:53:06.156 [INFO][4317] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" Namespace="kube-system" Pod="coredns-674b8bbfcf-c92kd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c92kd-eth0" Sep 10 04:53:06.186456 containerd[1503]: 2025-09-10 04:53:06.159 [INFO][4317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali22f89df4e5d ContainerID="37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" Namespace="kube-system" Pod="coredns-674b8bbfcf-c92kd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c92kd-eth0" Sep 10 04:53:06.186456 containerd[1503]: 2025-09-10 04:53:06.162 [INFO][4317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" Namespace="kube-system" Pod="coredns-674b8bbfcf-c92kd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c92kd-eth0" Sep 10 04:53:06.186456 containerd[1503]: 2025-09-10 04:53:06.162 [INFO][4317] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" Namespace="kube-system" Pod="coredns-674b8bbfcf-c92kd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c92kd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--c92kd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fabeda3a-c65d-479a-a08f-68ff0fb2fbf3", ResourceVersion:"834", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53", Pod:"coredns-674b8bbfcf-c92kd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali22f89df4e5d", MAC:"c2:e4:c6:24:a5:c4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:53:06.186456 containerd[1503]: 2025-09-10 04:53:06.177 [INFO][4317] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" Namespace="kube-system" Pod="coredns-674b8bbfcf-c92kd" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--c92kd-eth0" Sep 10 04:53:06.189143 systemd[1]: Started cri-containerd-3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4.scope - libcontainer container 3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4. Sep 10 04:53:06.196261 containerd[1503]: time="2025-09-10T04:53:06.196211376Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c34b1e666a2572594561184f8f494bc4b389c766fcdcf8c776febaa3ff85ec39\" id:\"3d2f2c0eb636760d0d23997a2b67599a444edd60fc3478856d65b6867efd35f5\" pid:4588 exited_at:{seconds:1757479986 nanos:195393403}" Sep 10 04:53:06.201971 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:53:06.213483 containerd[1503]: time="2025-09-10T04:53:06.213434729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-fhkl2,Uid:0f6048bb-41e4-4c15-be39-3709646ca95d,Namespace:calico-system,Attempt:0,} returns sandbox id \"971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc\"" Sep 10 04:53:06.251612 containerd[1503]: time="2025-09-10T04:53:06.251553053Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-vhhp5,Uid:5133c5ff-97e8-40b7-b660-6b80eb06137d,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4\"" Sep 10 04:53:06.259886 containerd[1503]: time="2025-09-10T04:53:06.259846585Z" level=info msg="CreateContainer within sandbox \"3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 04:53:06.266703 containerd[1503]: time="2025-09-10T04:53:06.266662813Z" level=info msg="connecting to shim 37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53" address="unix:///run/containerd/s/23f6a8ed532a86f85fb9177679f20f36f589e85172676874856a4a0cc42b799b" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:53:06.291747 containerd[1503]: time="2025-09-10T04:53:06.291707730Z" level=info msg="Container 665717947b01ef5ab9504139107e1005c77f73126284ca4f034516cced14db53: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:53:06.305266 systemd[1]: Started cri-containerd-37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53.scope - libcontainer container 37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53. 
Sep 10 04:53:06.306991 containerd[1503]: time="2025-09-10T04:53:06.306510605Z" level=info msg="CreateContainer within sandbox \"3bc10fd4a7d0bdab4b92dac04c32e035923121582614d77a01fe88fdc8527da4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"665717947b01ef5ab9504139107e1005c77f73126284ca4f034516cced14db53\"" Sep 10 04:53:06.308764 containerd[1503]: time="2025-09-10T04:53:06.308727800Z" level=info msg="StartContainer for \"665717947b01ef5ab9504139107e1005c77f73126284ca4f034516cced14db53\"" Sep 10 04:53:06.310995 containerd[1503]: time="2025-09-10T04:53:06.310751592Z" level=info msg="connecting to shim 665717947b01ef5ab9504139107e1005c77f73126284ca4f034516cced14db53" address="unix:///run/containerd/s/70044edfbea674bcaf0250aff29a492336c02d92c5fb1a9d5e98ff6a6e5e84ec" protocol=ttrpc version=3 Sep 10 04:53:06.315545 containerd[1503]: time="2025-09-10T04:53:06.315510987Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c34b1e666a2572594561184f8f494bc4b389c766fcdcf8c776febaa3ff85ec39\" id:\"733983a14899879abed2c55eb95f402dfe8cbebbbd54124900d4f782c489f880\" pid:4667 exited_at:{seconds:1757479986 nanos:315069100}" Sep 10 04:53:06.323757 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:53:06.341255 systemd[1]: Started cri-containerd-665717947b01ef5ab9504139107e1005c77f73126284ca4f034516cced14db53.scope - libcontainer container 665717947b01ef5ab9504139107e1005c77f73126284ca4f034516cced14db53. 
Sep 10 04:53:06.355693 containerd[1503]: time="2025-09-10T04:53:06.355557302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c92kd,Uid:fabeda3a-c65d-479a-a08f-68ff0fb2fbf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53\"" Sep 10 04:53:06.362255 containerd[1503]: time="2025-09-10T04:53:06.362215488Z" level=info msg="CreateContainer within sandbox \"37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 04:53:06.372324 containerd[1503]: time="2025-09-10T04:53:06.372280487Z" level=info msg="Container b86319fc7ac59de2ab6a209a6567ea7ebee1d9f9e17e241f5368091748ed19ac: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:53:06.379995 containerd[1503]: time="2025-09-10T04:53:06.379898968Z" level=info msg="StartContainer for \"665717947b01ef5ab9504139107e1005c77f73126284ca4f034516cced14db53\" returns successfully" Sep 10 04:53:06.381420 containerd[1503]: time="2025-09-10T04:53:06.381373392Z" level=info msg="CreateContainer within sandbox \"37c667e0157326b60cc9d6d296ce05fedd34c112b6142ac5705c4838a99e5d53\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b86319fc7ac59de2ab6a209a6567ea7ebee1d9f9e17e241f5368091748ed19ac\"" Sep 10 04:53:06.381889 containerd[1503]: time="2025-09-10T04:53:06.381863719Z" level=info msg="StartContainer for \"b86319fc7ac59de2ab6a209a6567ea7ebee1d9f9e17e241f5368091748ed19ac\"" Sep 10 04:53:06.384367 containerd[1503]: time="2025-09-10T04:53:06.384333879Z" level=info msg="connecting to shim b86319fc7ac59de2ab6a209a6567ea7ebee1d9f9e17e241f5368091748ed19ac" address="unix:///run/containerd/s/23f6a8ed532a86f85fb9177679f20f36f589e85172676874856a4a0cc42b799b" protocol=ttrpc version=3 Sep 10 04:53:06.418275 systemd[1]: Started cri-containerd-b86319fc7ac59de2ab6a209a6567ea7ebee1d9f9e17e241f5368091748ed19ac.scope - libcontainer container 
b86319fc7ac59de2ab6a209a6567ea7ebee1d9f9e17e241f5368091748ed19ac. Sep 10 04:53:06.488084 containerd[1503]: time="2025-09-10T04:53:06.487475234Z" level=info msg="StartContainer for \"b86319fc7ac59de2ab6a209a6567ea7ebee1d9f9e17e241f5368091748ed19ac\" returns successfully" Sep 10 04:53:06.570760 containerd[1503]: time="2025-09-10T04:53:06.570611752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbzqr,Uid:f6ad6904-9c3f-4923-ac04-3f47e122a4d0,Namespace:calico-system,Attempt:0,}" Sep 10 04:53:06.692114 systemd-networkd[1432]: cali36b286d5a80: Gained IPv6LL Sep 10 04:53:06.714384 systemd-networkd[1432]: califead1464ffa: Link UP Sep 10 04:53:06.714941 systemd-networkd[1432]: califead1464ffa: Gained carrier Sep 10 04:53:06.733018 kubelet[2663]: I0910 04:53:06.732956 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vhhp5" podStartSLOduration=35.732919486 podStartE2EDuration="35.732919486s" podCreationTimestamp="2025-09-10 04:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 04:53:06.730089881 +0000 UTC m=+42.249768954" watchObservedRunningTime="2025-09-10 04:53:06.732919486 +0000 UTC m=+42.252598479" Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.638 [INFO][4795] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--cbzqr-eth0 csi-node-driver- calico-system f6ad6904-9c3f-4923-ac04-3f47e122a4d0 735 0 2025-09-10 04:52:45 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-cbzqr eth0 csi-node-driver [] [] 
[kns.calico-system ksa.calico-system.csi-node-driver] califead1464ffa [] [] }} ContainerID="7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" Namespace="calico-system" Pod="csi-node-driver-cbzqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--cbzqr-" Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.638 [INFO][4795] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" Namespace="calico-system" Pod="csi-node-driver-cbzqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--cbzqr-eth0" Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.664 [INFO][4808] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" HandleID="k8s-pod-network.7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" Workload="localhost-k8s-csi--node--driver--cbzqr-eth0" Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.664 [INFO][4808] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" HandleID="k8s-pod-network.7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" Workload="localhost-k8s-csi--node--driver--cbzqr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001377a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-cbzqr", "timestamp":"2025-09-10 04:53:06.664082554 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.664 [INFO][4808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.664 [INFO][4808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.664 [INFO][4808] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.677 [INFO][4808] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" host="localhost" Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.682 [INFO][4808] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.686 [INFO][4808] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.689 [INFO][4808] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.691 [INFO][4808] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.691 [INFO][4808] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" host="localhost" Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.694 [INFO][4808] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12 Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.699 [INFO][4808] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" host="localhost" Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.705 [INFO][4808] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" host="localhost" Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.706 [INFO][4808] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" host="localhost" Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.706 [INFO][4808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 04:53:06.738557 containerd[1503]: 2025-09-10 04:53:06.706 [INFO][4808] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" HandleID="k8s-pod-network.7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" Workload="localhost-k8s-csi--node--driver--cbzqr-eth0" Sep 10 04:53:06.739444 containerd[1503]: 2025-09-10 04:53:06.711 [INFO][4795] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" Namespace="calico-system" Pod="csi-node-driver-cbzqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--cbzqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cbzqr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f6ad6904-9c3f-4923-ac04-3f47e122a4d0", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-cbzqr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califead1464ffa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:53:06.739444 containerd[1503]: 2025-09-10 04:53:06.711 [INFO][4795] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" Namespace="calico-system" Pod="csi-node-driver-cbzqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--cbzqr-eth0" Sep 10 04:53:06.739444 containerd[1503]: 2025-09-10 04:53:06.711 [INFO][4795] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califead1464ffa ContainerID="7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" Namespace="calico-system" Pod="csi-node-driver-cbzqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--cbzqr-eth0" Sep 10 04:53:06.739444 containerd[1503]: 2025-09-10 04:53:06.715 [INFO][4795] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" Namespace="calico-system" Pod="csi-node-driver-cbzqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--cbzqr-eth0" Sep 10 04:53:06.739444 containerd[1503]: 2025-09-10 04:53:06.717 [INFO][4795] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" Namespace="calico-system" Pod="csi-node-driver-cbzqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--cbzqr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--cbzqr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f6ad6904-9c3f-4923-ac04-3f47e122a4d0", ResourceVersion:"735", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 4, 52, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12", Pod:"csi-node-driver-cbzqr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califead1464ffa", MAC:"e2:b8:4a:50:ab:13", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 04:53:06.739444 containerd[1503]: 2025-09-10 04:53:06.731 [INFO][4795] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" 
Namespace="calico-system" Pod="csi-node-driver-cbzqr" WorkloadEndpoint="localhost-k8s-csi--node--driver--cbzqr-eth0" Sep 10 04:53:06.746241 kubelet[2663]: I0910 04:53:06.745690 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-c92kd" podStartSLOduration=35.745676208 podStartE2EDuration="35.745676208s" podCreationTimestamp="2025-09-10 04:52:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 04:53:06.74516884 +0000 UTC m=+42.264847913" watchObservedRunningTime="2025-09-10 04:53:06.745676208 +0000 UTC m=+42.265355201" Sep 10 04:53:06.777268 containerd[1503]: time="2025-09-10T04:53:06.777215588Z" level=info msg="connecting to shim 7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12" address="unix:///run/containerd/s/b05dabbc9f72d91d5b37bc6d6186a874e90c8f1b7b3309373b7d5ff86364ca75" namespace=k8s.io protocol=ttrpc version=3 Sep 10 04:53:06.816095 systemd[1]: Started cri-containerd-7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12.scope - libcontainer container 7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12. 
Sep 10 04:53:06.886449 systemd-resolved[1355]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 04:53:06.902375 containerd[1503]: time="2025-09-10T04:53:06.902339612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cbzqr,Uid:f6ad6904-9c3f-4923-ac04-3f47e122a4d0,Namespace:calico-system,Attempt:0,} returns sandbox id \"7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12\"" Sep 10 04:53:07.011157 systemd-networkd[1432]: calibd18a897e64: Gained IPv6LL Sep 10 04:53:07.035180 containerd[1503]: time="2025-09-10T04:53:07.035132584Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:53:07.035651 containerd[1503]: time="2025-09-10T04:53:07.035606431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 10 04:53:07.036618 containerd[1503]: time="2025-09-10T04:53:07.036590246Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:53:07.038523 containerd[1503]: time="2025-09-10T04:53:07.038483195Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:53:07.039016 containerd[1503]: time="2025-09-10T04:53:07.038986003Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.24946263s" Sep 10 04:53:07.039057 
containerd[1503]: time="2025-09-10T04:53:07.039021684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 10 04:53:07.041144 containerd[1503]: time="2025-09-10T04:53:07.041117836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 04:53:07.044461 containerd[1503]: time="2025-09-10T04:53:07.044412887Z" level=info msg="CreateContainer within sandbox \"2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 04:53:07.052241 containerd[1503]: time="2025-09-10T04:53:07.052201847Z" level=info msg="Container 7de159190cc4ec84ba1b841ae586ad0aa9d4e64c0766979888c66da1dc10469a: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:53:07.058696 containerd[1503]: time="2025-09-10T04:53:07.058657347Z" level=info msg="CreateContainer within sandbox \"2f1a08c598d3b39718b75a2dab79442f81f37345fdfa3a93271db81a850e406e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7de159190cc4ec84ba1b841ae586ad0aa9d4e64c0766979888c66da1dc10469a\"" Sep 10 04:53:07.059313 containerd[1503]: time="2025-09-10T04:53:07.059278437Z" level=info msg="StartContainer for \"7de159190cc4ec84ba1b841ae586ad0aa9d4e64c0766979888c66da1dc10469a\"" Sep 10 04:53:07.060668 containerd[1503]: time="2025-09-10T04:53:07.060639458Z" level=info msg="connecting to shim 7de159190cc4ec84ba1b841ae586ad0aa9d4e64c0766979888c66da1dc10469a" address="unix:///run/containerd/s/69335f9e0cdebbd5171c756a04f4ecce12dd92fb22fe7da4b0a4dda7fc60ca66" protocol=ttrpc version=3 Sep 10 04:53:07.088143 systemd[1]: Started cri-containerd-7de159190cc4ec84ba1b841ae586ad0aa9d4e64c0766979888c66da1dc10469a.scope - libcontainer container 7de159190cc4ec84ba1b841ae586ad0aa9d4e64c0766979888c66da1dc10469a. 
Sep 10 04:53:07.122979 containerd[1503]: time="2025-09-10T04:53:07.122907660Z" level=info msg="StartContainer for \"7de159190cc4ec84ba1b841ae586ad0aa9d4e64c0766979888c66da1dc10469a\" returns successfully" Sep 10 04:53:07.203182 systemd-networkd[1432]: cali679d3cf4676: Gained IPv6LL Sep 10 04:53:07.459063 systemd-networkd[1432]: cali22f89df4e5d: Gained IPv6LL Sep 10 04:53:07.524052 systemd-networkd[1432]: cali1432d66c7b8: Gained IPv6LL Sep 10 04:53:07.715095 systemd-networkd[1432]: calidb33aa77f4a: Gained IPv6LL Sep 10 04:53:07.907064 systemd-networkd[1432]: califead1464ffa: Gained IPv6LL Sep 10 04:53:08.734695 kubelet[2663]: I0910 04:53:08.734661 2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 04:53:08.942848 containerd[1503]: time="2025-09-10T04:53:08.942803463Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:53:08.943765 containerd[1503]: time="2025-09-10T04:53:08.943289470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 10 04:53:08.944187 containerd[1503]: time="2025-09-10T04:53:08.944156163Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:53:08.946727 containerd[1503]: time="2025-09-10T04:53:08.946698281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:53:08.947407 containerd[1503]: time="2025-09-10T04:53:08.947216129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.906066973s" Sep 10 04:53:08.947407 containerd[1503]: time="2025-09-10T04:53:08.947250730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 10 04:53:08.948485 containerd[1503]: time="2025-09-10T04:53:08.948294466Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 04:53:09.013923 containerd[1503]: time="2025-09-10T04:53:09.013552684Z" level=info msg="CreateContainer within sandbox \"b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 04:53:09.022627 containerd[1503]: time="2025-09-10T04:53:09.022578897Z" level=info msg="Container 07b8df0dc27479bac57e0043c9fdf44759715509fb86293a726aaa340388b526: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:53:09.030079 containerd[1503]: time="2025-09-10T04:53:09.030041767Z" level=info msg="CreateContainer within sandbox \"b85dd1da3373b0f8f9c35924054abc5cc427496fde4d04aa6c7a6eddb36b1d46\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"07b8df0dc27479bac57e0043c9fdf44759715509fb86293a726aaa340388b526\"" Sep 10 04:53:09.030856 containerd[1503]: time="2025-09-10T04:53:09.030823459Z" level=info msg="StartContainer for \"07b8df0dc27479bac57e0043c9fdf44759715509fb86293a726aaa340388b526\"" Sep 10 04:53:09.032331 containerd[1503]: time="2025-09-10T04:53:09.032300160Z" level=info msg="connecting to shim 07b8df0dc27479bac57e0043c9fdf44759715509fb86293a726aaa340388b526" address="unix:///run/containerd/s/f803e49e3ecacc9948d45d1c44181cd21a9c1d73758961d1f7a85a2c49354c85" protocol=ttrpc version=3 Sep 10 04:53:09.057096 systemd[1]: Started 
cri-containerd-07b8df0dc27479bac57e0043c9fdf44759715509fb86293a726aaa340388b526.scope - libcontainer container 07b8df0dc27479bac57e0043c9fdf44759715509fb86293a726aaa340388b526. Sep 10 04:53:09.099727 containerd[1503]: time="2025-09-10T04:53:09.099685272Z" level=info msg="StartContainer for \"07b8df0dc27479bac57e0043c9fdf44759715509fb86293a726aaa340388b526\" returns successfully" Sep 10 04:53:09.190081 containerd[1503]: time="2025-09-10T04:53:09.190018762Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 04:53:09.191119 containerd[1503]: time="2025-09-10T04:53:09.191035537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 10 04:53:09.195114 containerd[1503]: time="2025-09-10T04:53:09.194980195Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 246.651888ms" Sep 10 04:53:09.195114 containerd[1503]: time="2025-09-10T04:53:09.195016555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 10 04:53:09.195946 containerd[1503]: time="2025-09-10T04:53:09.195904648Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 04:53:09.199851 containerd[1503]: time="2025-09-10T04:53:09.199809306Z" level=info msg="CreateContainer within sandbox \"d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 04:53:09.211360 containerd[1503]: time="2025-09-10T04:53:09.211291595Z" 
level=info msg="Container 23fce6b5b66968b0ac476e0cf0d47b5e4bd7befefb553addf4ae3aafafaa7a63: CDI devices from CRI Config.CDIDevices: []" Sep 10 04:53:09.219364 containerd[1503]: time="2025-09-10T04:53:09.219315793Z" level=info msg="CreateContainer within sandbox \"d440c84a92caa3f1af29fa1f68826719fb4cd27ef84d6ddbe9a2fef15eef1260\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"23fce6b5b66968b0ac476e0cf0d47b5e4bd7befefb553addf4ae3aafafaa7a63\"" Sep 10 04:53:09.219990 containerd[1503]: time="2025-09-10T04:53:09.219874721Z" level=info msg="StartContainer for \"23fce6b5b66968b0ac476e0cf0d47b5e4bd7befefb553addf4ae3aafafaa7a63\"" Sep 10 04:53:09.221481 containerd[1503]: time="2025-09-10T04:53:09.221438744Z" level=info msg="connecting to shim 23fce6b5b66968b0ac476e0cf0d47b5e4bd7befefb553addf4ae3aafafaa7a63" address="unix:///run/containerd/s/07b340013b800d825dd273d633d5769864eb42b46d766f66adbf05c71bc6b10f" protocol=ttrpc version=3 Sep 10 04:53:09.245108 systemd[1]: Started cri-containerd-23fce6b5b66968b0ac476e0cf0d47b5e4bd7befefb553addf4ae3aafafaa7a63.scope - libcontainer container 23fce6b5b66968b0ac476e0cf0d47b5e4bd7befefb553addf4ae3aafafaa7a63. 
Sep 10 04:53:09.285852 containerd[1503]: time="2025-09-10T04:53:09.285719850Z" level=info msg="StartContainer for \"23fce6b5b66968b0ac476e0cf0d47b5e4bd7befefb553addf4ae3aafafaa7a63\" returns successfully" Sep 10 04:53:09.761868 kubelet[2663]: I0910 04:53:09.761736 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ddb8f475b-4t2gd" podStartSLOduration=26.602196088 podStartE2EDuration="29.761718175s" podCreationTimestamp="2025-09-10 04:52:40 +0000 UTC" firstStartedPulling="2025-09-10 04:53:06.036231799 +0000 UTC m=+41.555910832" lastFinishedPulling="2025-09-10 04:53:09.195753886 +0000 UTC m=+44.715432919" observedRunningTime="2025-09-10 04:53:09.760287834 +0000 UTC m=+45.279966867" watchObservedRunningTime="2025-09-10 04:53:09.761718175 +0000 UTC m=+45.281397208" Sep 10 04:53:09.762304 kubelet[2663]: I0910 04:53:09.761949 2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5ddb8f475b-qmv9d" podStartSLOduration=27.510953645 podStartE2EDuration="29.761943139s" podCreationTimestamp="2025-09-10 04:52:40 +0000 UTC" firstStartedPulling="2025-09-10 04:53:04.789280089 +0000 UTC m=+40.308959122" lastFinishedPulling="2025-09-10 04:53:07.040269583 +0000 UTC m=+42.559948616" observedRunningTime="2025-09-10 04:53:07.743267405 +0000 UTC m=+43.262946478" watchObservedRunningTime="2025-09-10 04:53:09.761943139 +0000 UTC m=+45.281622172" Sep 10 04:53:10.238902 systemd[1]: Started sshd@8-10.0.0.60:22-10.0.0.1:55706.service - OpenSSH per-connection server daemon (10.0.0.1:55706). Sep 10 04:53:10.322357 sshd[5012]: Accepted publickey for core from 10.0.0.1 port 55706 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU Sep 10 04:53:10.323609 sshd-session[5012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 04:53:10.329391 systemd-logind[1476]: New session 9 of user core. 
Sep 10 04:53:10.338123 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 10 04:53:10.674173 sshd[5015]: Connection closed by 10.0.0.1 port 55706
Sep 10 04:53:10.674566 sshd-session[5012]: pam_unix(sshd:session): session closed for user core
Sep 10 04:53:10.679656 systemd[1]: sshd@8-10.0.0.60:22-10.0.0.1:55706.service: Deactivated successfully.
Sep 10 04:53:10.683190 systemd[1]: session-9.scope: Deactivated successfully.
Sep 10 04:53:10.684384 systemd-logind[1476]: Session 9 logged out. Waiting for processes to exit.
Sep 10 04:53:10.686206 systemd-logind[1476]: Removed session 9.
Sep 10 04:53:10.744875 kubelet[2663]: I0910 04:53:10.744839    2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 04:53:10.965833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1846036636.mount: Deactivated successfully.
Sep 10 04:53:11.069359 kubelet[2663]: I0910 04:53:11.069295    2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7f747d4549-pwpt8" podStartSLOduration=23.021108414 podStartE2EDuration="26.069276962s" podCreationTimestamp="2025-09-10 04:52:45 +0000 UTC" firstStartedPulling="2025-09-10 04:53:05.899899714 +0000 UTC m=+41.419578747" lastFinishedPulling="2025-09-10 04:53:08.948068262 +0000 UTC m=+44.467747295" observedRunningTime="2025-09-10 04:53:09.77630111 +0000 UTC m=+45.295980143" watchObservedRunningTime="2025-09-10 04:53:11.069276962 +0000 UTC m=+46.588955995"
Sep 10 04:53:11.522792 containerd[1503]: time="2025-09-10T04:53:11.522698743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 04:53:11.523586 containerd[1503]: time="2025-09-10T04:53:11.523520754Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 10 04:53:11.524955 containerd[1503]: time="2025-09-10T04:53:11.524740131Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 04:53:11.527336 containerd[1503]: time="2025-09-10T04:53:11.527292607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 04:53:11.528210 containerd[1503]: time="2025-09-10T04:53:11.528177140Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.332224691s"
Sep 10 04:53:11.528289 containerd[1503]: time="2025-09-10T04:53:11.528210940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 10 04:53:11.530174 containerd[1503]: time="2025-09-10T04:53:11.530147768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 10 04:53:11.533373 containerd[1503]: time="2025-09-10T04:53:11.533319572Z" level=info msg="CreateContainer within sandbox \"971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 10 04:53:11.540287 containerd[1503]: time="2025-09-10T04:53:11.540250910Z" level=info msg="Container 3cbda0b62df3d84967fdf96514228685a440731cf6776bf305f93bb6f9be56cd: CDI devices from CRI Config.CDIDevices: []"
Sep 10 04:53:11.548977 containerd[1503]: time="2025-09-10T04:53:11.548910392Z" level=info msg="CreateContainer within sandbox \"971c690d1169980502cf079c7b0cca9e4716624773cb7ce6a01c03d3e35c88bc\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3cbda0b62df3d84967fdf96514228685a440731cf6776bf305f93bb6f9be56cd\""
Sep 10 04:53:11.549727 containerd[1503]: time="2025-09-10T04:53:11.549680362Z" level=info msg="StartContainer for \"3cbda0b62df3d84967fdf96514228685a440731cf6776bf305f93bb6f9be56cd\""
Sep 10 04:53:11.551166 containerd[1503]: time="2025-09-10T04:53:11.551106903Z" level=info msg="connecting to shim 3cbda0b62df3d84967fdf96514228685a440731cf6776bf305f93bb6f9be56cd" address="unix:///run/containerd/s/37b0d63ea15ad563f534fdbd1f616b3dfb86e53fb48d45f84280f01d5e13abc6" protocol=ttrpc version=3
Sep 10 04:53:11.581158 systemd[1]: Started cri-containerd-3cbda0b62df3d84967fdf96514228685a440731cf6776bf305f93bb6f9be56cd.scope - libcontainer container 3cbda0b62df3d84967fdf96514228685a440731cf6776bf305f93bb6f9be56cd.
Sep 10 04:53:11.623651 containerd[1503]: time="2025-09-10T04:53:11.623609243Z" level=info msg="StartContainer for \"3cbda0b62df3d84967fdf96514228685a440731cf6776bf305f93bb6f9be56cd\" returns successfully"
Sep 10 04:53:12.545762 containerd[1503]: time="2025-09-10T04:53:12.545717979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 04:53:12.546922 containerd[1503]: time="2025-09-10T04:53:12.546279467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 10 04:53:12.547504 containerd[1503]: time="2025-09-10T04:53:12.547468444Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 04:53:12.555541 containerd[1503]: time="2025-09-10T04:53:12.555507074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 04:53:12.556276 containerd[1503]: time="2025-09-10T04:53:12.556243084Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.025928954s"
Sep 10 04:53:12.556276 containerd[1503]: time="2025-09-10T04:53:12.556274245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 10 04:53:12.560486 containerd[1503]: time="2025-09-10T04:53:12.560448262Z" level=info msg="CreateContainer within sandbox \"7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 10 04:53:12.571976 containerd[1503]: time="2025-09-10T04:53:12.571231891Z" level=info msg="Container 044d3cb22f914d7b02b8dc1005890e8b30e585ee840978ddbc32ed5ab333eb7b: CDI devices from CRI Config.CDIDevices: []"
Sep 10 04:53:12.587065 containerd[1503]: time="2025-09-10T04:53:12.587021789Z" level=info msg="CreateContainer within sandbox \"7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"044d3cb22f914d7b02b8dc1005890e8b30e585ee840978ddbc32ed5ab333eb7b\""
Sep 10 04:53:12.587861 containerd[1503]: time="2025-09-10T04:53:12.587824760Z" level=info msg="StartContainer for \"044d3cb22f914d7b02b8dc1005890e8b30e585ee840978ddbc32ed5ab333eb7b\""
Sep 10 04:53:12.589999 containerd[1503]: time="2025-09-10T04:53:12.589595144Z" level=info msg="connecting to shim 044d3cb22f914d7b02b8dc1005890e8b30e585ee840978ddbc32ed5ab333eb7b" address="unix:///run/containerd/s/b05dabbc9f72d91d5b37bc6d6186a874e90c8f1b7b3309373b7d5ff86364ca75" protocol=ttrpc version=3
Sep 10 04:53:12.619139 systemd[1]: Started cri-containerd-044d3cb22f914d7b02b8dc1005890e8b30e585ee840978ddbc32ed5ab333eb7b.scope - libcontainer container 044d3cb22f914d7b02b8dc1005890e8b30e585ee840978ddbc32ed5ab333eb7b.
Sep 10 04:53:12.654659 containerd[1503]: time="2025-09-10T04:53:12.654619600Z" level=info msg="StartContainer for \"044d3cb22f914d7b02b8dc1005890e8b30e585ee840978ddbc32ed5ab333eb7b\" returns successfully"
Sep 10 04:53:12.657138 containerd[1503]: time="2025-09-10T04:53:12.656182942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 10 04:53:12.858225 containerd[1503]: time="2025-09-10T04:53:12.858110324Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cbda0b62df3d84967fdf96514228685a440731cf6776bf305f93bb6f9be56cd\" id:\"7028b1fe6de39266244145b01a005435336d5e81f431c90221cffb3ef2cc7778\" pid:5127 exit_status:1 exited_at:{seconds:1757479992 nanos:857538076}"
Sep 10 04:53:13.604208 containerd[1503]: time="2025-09-10T04:53:13.604134358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 04:53:13.605119 containerd[1503]: time="2025-09-10T04:53:13.604703086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 10 04:53:13.605614 containerd[1503]: time="2025-09-10T04:53:13.605580938Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 04:53:13.607961 containerd[1503]: time="2025-09-10T04:53:13.607922609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 04:53:13.608691 containerd[1503]: time="2025-09-10T04:53:13.608484497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 951.335342ms"
Sep 10 04:53:13.608691 containerd[1503]: time="2025-09-10T04:53:13.608528578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 10 04:53:13.612867 containerd[1503]: time="2025-09-10T04:53:13.612828516Z" level=info msg="CreateContainer within sandbox \"7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 10 04:53:13.619520 containerd[1503]: time="2025-09-10T04:53:13.619480325Z" level=info msg="Container 29c2a2bf486f98916a3a5d9e0181fd46cf6b2aa6296f3979136534efce271c80: CDI devices from CRI Config.CDIDevices: []"
Sep 10 04:53:13.629662 containerd[1503]: time="2025-09-10T04:53:13.629536261Z" level=info msg="CreateContainer within sandbox \"7bfb7a97bd4ec8c4391fc416a0439d95da6e174aada5c15dd765929c50805d12\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"29c2a2bf486f98916a3a5d9e0181fd46cf6b2aa6296f3979136534efce271c80\""
Sep 10 04:53:13.630244 containerd[1503]: time="2025-09-10T04:53:13.630216790Z" level=info msg="StartContainer for \"29c2a2bf486f98916a3a5d9e0181fd46cf6b2aa6296f3979136534efce271c80\""
Sep 10 04:53:13.631783 containerd[1503]: time="2025-09-10T04:53:13.631742251Z" level=info msg="connecting to shim 29c2a2bf486f98916a3a5d9e0181fd46cf6b2aa6296f3979136534efce271c80" address="unix:///run/containerd/s/b05dabbc9f72d91d5b37bc6d6186a874e90c8f1b7b3309373b7d5ff86364ca75" protocol=ttrpc version=3
Sep 10 04:53:13.670366 systemd[1]: Started cri-containerd-29c2a2bf486f98916a3a5d9e0181fd46cf6b2aa6296f3979136534efce271c80.scope - libcontainer container 29c2a2bf486f98916a3a5d9e0181fd46cf6b2aa6296f3979136534efce271c80.
Sep 10 04:53:13.720006 containerd[1503]: time="2025-09-10T04:53:13.719956362Z" level=info msg="StartContainer for \"29c2a2bf486f98916a3a5d9e0181fd46cf6b2aa6296f3979136534efce271c80\" returns successfully"
Sep 10 04:53:13.774811 kubelet[2663]: I0910 04:53:13.774742    2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-fhkl2" podStartSLOduration=23.462642176 podStartE2EDuration="28.774723822s" podCreationTimestamp="2025-09-10 04:52:45 +0000 UTC" firstStartedPulling="2025-09-10 04:53:06.217325191 +0000 UTC m=+41.737004224" lastFinishedPulling="2025-09-10 04:53:11.529406837 +0000 UTC m=+47.049085870" observedRunningTime="2025-09-10 04:53:11.767770431 +0000 UTC m=+47.287449464" watchObservedRunningTime="2025-09-10 04:53:13.774723822 +0000 UTC m=+49.294402855"
Sep 10 04:53:13.775220 kubelet[2663]: I0910 04:53:13.774967    2663 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cbzqr" podStartSLOduration=22.069733796 podStartE2EDuration="28.774962345s" podCreationTimestamp="2025-09-10 04:52:45 +0000 UTC" firstStartedPulling="2025-09-10 04:53:06.904201281 +0000 UTC m=+42.423880314" lastFinishedPulling="2025-09-10 04:53:13.60942983 +0000 UTC m=+49.129108863" observedRunningTime="2025-09-10 04:53:13.774376057 +0000 UTC m=+49.294055090" watchObservedRunningTime="2025-09-10 04:53:13.774962345 +0000 UTC m=+49.294641378"
Sep 10 04:53:13.830689 containerd[1503]: time="2025-09-10T04:53:13.830650017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3cbda0b62df3d84967fdf96514228685a440731cf6776bf305f93bb6f9be56cd\" id:\"38f16b26942775ff1e528c638775bc16c025f136017e03bd646afcd28ecdbeea\" pid:5191 exit_status:1 exited_at:{seconds:1757479993 nanos:830309013}"
Sep 10 04:53:14.468085 kubelet[2663]: I0910 04:53:14.468043    2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 04:53:14.509631 containerd[1503]: time="2025-09-10T04:53:14.509593456Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07b8df0dc27479bac57e0043c9fdf44759715509fb86293a726aaa340388b526\" id:\"2c3f9092a71a4cccdca424996919e5b4199252b81fd9c3a26d6a85b69e4cd5dd\" pid:5216 exited_at:{seconds:1757479994 nanos:509324692}"
Sep 10 04:53:14.554268 containerd[1503]: time="2025-09-10T04:53:14.554230407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"07b8df0dc27479bac57e0043c9fdf44759715509fb86293a726aaa340388b526\" id:\"662feb00e0a4f3b0fedf99fe81a9036b1ddfd0afdd7301dd67e0e9237934e161\" pid:5238 exited_at:{seconds:1757479994 nanos:554034324}"
Sep 10 04:53:14.644297 kubelet[2663]: I0910 04:53:14.644238    2663 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 10 04:53:14.644297 kubelet[2663]: I0910 04:53:14.644289    2663 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 10 04:53:15.693297 systemd[1]: Started sshd@9-10.0.0.60:22-10.0.0.1:55718.service - OpenSSH per-connection server daemon (10.0.0.1:55718).
Sep 10 04:53:15.755351 sshd[5257]: Accepted publickey for core from 10.0.0.1 port 55718 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU
Sep 10 04:53:15.757684 sshd-session[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 04:53:15.763015 systemd-logind[1476]: New session 10 of user core.
Sep 10 04:53:15.771111 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 10 04:53:15.945032 sshd[5260]: Connection closed by 10.0.0.1 port 55718
Sep 10 04:53:15.944135 sshd-session[5257]: pam_unix(sshd:session): session closed for user core
Sep 10 04:53:15.955121 systemd[1]: sshd@9-10.0.0.60:22-10.0.0.1:55718.service: Deactivated successfully.
Sep 10 04:53:15.957309 systemd[1]: session-10.scope: Deactivated successfully.
Sep 10 04:53:15.959383 systemd-logind[1476]: Session 10 logged out. Waiting for processes to exit.
Sep 10 04:53:15.961622 systemd[1]: Started sshd@10-10.0.0.60:22-10.0.0.1:55722.service - OpenSSH per-connection server daemon (10.0.0.1:55722).
Sep 10 04:53:15.962693 systemd-logind[1476]: Removed session 10.
Sep 10 04:53:16.018089 sshd[5277]: Accepted publickey for core from 10.0.0.1 port 55722 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU
Sep 10 04:53:16.020150 sshd-session[5277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 04:53:16.027377 systemd-logind[1476]: New session 11 of user core.
Sep 10 04:53:16.036134 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 10 04:53:16.280902 sshd[5280]: Connection closed by 10.0.0.1 port 55722
Sep 10 04:53:16.281748 sshd-session[5277]: pam_unix(sshd:session): session closed for user core
Sep 10 04:53:16.294538 systemd[1]: sshd@10-10.0.0.60:22-10.0.0.1:55722.service: Deactivated successfully.
Sep 10 04:53:16.298543 systemd[1]: session-11.scope: Deactivated successfully.
Sep 10 04:53:16.300217 systemd-logind[1476]: Session 11 logged out. Waiting for processes to exit.
Sep 10 04:53:16.304230 systemd[1]: Started sshd@11-10.0.0.60:22-10.0.0.1:55726.service - OpenSSH per-connection server daemon (10.0.0.1:55726).
Sep 10 04:53:16.305236 systemd-logind[1476]: Removed session 11.
Sep 10 04:53:16.361197 sshd[5292]: Accepted publickey for core from 10.0.0.1 port 55726 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU
Sep 10 04:53:16.363057 sshd-session[5292]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 04:53:16.370225 systemd-logind[1476]: New session 12 of user core.
Sep 10 04:53:16.374091 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 10 04:53:16.521849 sshd[5295]: Connection closed by 10.0.0.1 port 55726
Sep 10 04:53:16.522182 sshd-session[5292]: pam_unix(sshd:session): session closed for user core
Sep 10 04:53:16.525720 systemd-logind[1476]: Session 12 logged out. Waiting for processes to exit.
Sep 10 04:53:16.526060 systemd[1]: session-12.scope: Deactivated successfully.
Sep 10 04:53:16.526799 systemd[1]: sshd@11-10.0.0.60:22-10.0.0.1:55726.service: Deactivated successfully.
Sep 10 04:53:16.532354 systemd-logind[1476]: Removed session 12.
Sep 10 04:53:21.535178 systemd[1]: Started sshd@12-10.0.0.60:22-10.0.0.1:52070.service - OpenSSH per-connection server daemon (10.0.0.1:52070).
Sep 10 04:53:21.591837 sshd[5317]: Accepted publickey for core from 10.0.0.1 port 52070 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU
Sep 10 04:53:21.592996 sshd-session[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 04:53:21.597002 systemd-logind[1476]: New session 13 of user core.
Sep 10 04:53:21.604141 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 10 04:53:21.774116 sshd[5320]: Connection closed by 10.0.0.1 port 52070
Sep 10 04:53:21.774777 sshd-session[5317]: pam_unix(sshd:session): session closed for user core
Sep 10 04:53:21.789021 systemd[1]: sshd@12-10.0.0.60:22-10.0.0.1:52070.service: Deactivated successfully.
Sep 10 04:53:21.791364 systemd[1]: session-13.scope: Deactivated successfully.
Sep 10 04:53:21.792073 systemd-logind[1476]: Session 13 logged out. Waiting for processes to exit.
Sep 10 04:53:21.794379 systemd[1]: Started sshd@13-10.0.0.60:22-10.0.0.1:52074.service - OpenSSH per-connection server daemon (10.0.0.1:52074).
Sep 10 04:53:21.794889 systemd-logind[1476]: Removed session 13.
Sep 10 04:53:21.852280 sshd[5333]: Accepted publickey for core from 10.0.0.1 port 52074 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU
Sep 10 04:53:21.853318 sshd-session[5333]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 04:53:21.857269 systemd-logind[1476]: New session 14 of user core.
Sep 10 04:53:21.864082 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 10 04:53:22.109084 sshd[5336]: Connection closed by 10.0.0.1 port 52074
Sep 10 04:53:22.109884 sshd-session[5333]: pam_unix(sshd:session): session closed for user core
Sep 10 04:53:22.121146 systemd[1]: sshd@13-10.0.0.60:22-10.0.0.1:52074.service: Deactivated successfully.
Sep 10 04:53:22.122834 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 04:53:22.123636 systemd-logind[1476]: Session 14 logged out. Waiting for processes to exit.
Sep 10 04:53:22.125689 systemd[1]: Started sshd@14-10.0.0.60:22-10.0.0.1:52086.service - OpenSSH per-connection server daemon (10.0.0.1:52086).
Sep 10 04:53:22.127243 systemd-logind[1476]: Removed session 14.
Sep 10 04:53:22.183357 sshd[5348]: Accepted publickey for core from 10.0.0.1 port 52086 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU
Sep 10 04:53:22.184585 sshd-session[5348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 04:53:22.188727 systemd-logind[1476]: New session 15 of user core.
Sep 10 04:53:22.200081 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 10 04:53:22.989651 sshd[5352]: Connection closed by 10.0.0.1 port 52086
Sep 10 04:53:22.990081 sshd-session[5348]: pam_unix(sshd:session): session closed for user core
Sep 10 04:53:23.000840 systemd[1]: sshd@14-10.0.0.60:22-10.0.0.1:52086.service: Deactivated successfully.
Sep 10 04:53:23.006900 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 04:53:23.008471 systemd-logind[1476]: Session 15 logged out. Waiting for processes to exit.
Sep 10 04:53:23.013428 systemd-logind[1476]: Removed session 15.
Sep 10 04:53:23.015504 systemd[1]: Started sshd@15-10.0.0.60:22-10.0.0.1:52096.service - OpenSSH per-connection server daemon (10.0.0.1:52096).
Sep 10 04:53:23.076154 sshd[5374]: Accepted publickey for core from 10.0.0.1 port 52096 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU
Sep 10 04:53:23.077499 sshd-session[5374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 04:53:23.081726 systemd-logind[1476]: New session 16 of user core.
Sep 10 04:53:23.101133 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 10 04:53:23.428996 sshd[5377]: Connection closed by 10.0.0.1 port 52096
Sep 10 04:53:23.430295 sshd-session[5374]: pam_unix(sshd:session): session closed for user core
Sep 10 04:53:23.443345 systemd[1]: sshd@15-10.0.0.60:22-10.0.0.1:52096.service: Deactivated successfully.
Sep 10 04:53:23.446711 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 04:53:23.448059 systemd-logind[1476]: Session 16 logged out. Waiting for processes to exit.
Sep 10 04:53:23.453530 systemd[1]: Started sshd@16-10.0.0.60:22-10.0.0.1:52108.service - OpenSSH per-connection server daemon (10.0.0.1:52108).
Sep 10 04:53:23.455076 systemd-logind[1476]: Removed session 16.
Sep 10 04:53:23.506328 sshd[5389]: Accepted publickey for core from 10.0.0.1 port 52108 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU
Sep 10 04:53:23.507714 sshd-session[5389]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 04:53:23.511593 systemd-logind[1476]: New session 17 of user core.
Sep 10 04:53:23.523421 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 10 04:53:23.661408 sshd[5392]: Connection closed by 10.0.0.1 port 52108
Sep 10 04:53:23.661732 sshd-session[5389]: pam_unix(sshd:session): session closed for user core
Sep 10 04:53:23.665018 systemd[1]: sshd@16-10.0.0.60:22-10.0.0.1:52108.service: Deactivated successfully.
Sep 10 04:53:23.666838 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 04:53:23.668397 systemd-logind[1476]: Session 17 logged out. Waiting for processes to exit.
Sep 10 04:53:23.669644 systemd-logind[1476]: Removed session 17.
Sep 10 04:53:24.082691 kubelet[2663]: I0910 04:53:24.082580    2663 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 04:53:28.680635 systemd[1]: Started sshd@17-10.0.0.60:22-10.0.0.1:52120.service - OpenSSH per-connection server daemon (10.0.0.1:52120).
Sep 10 04:53:28.723916 sshd[5413]: Accepted publickey for core from 10.0.0.1 port 52120 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU
Sep 10 04:53:28.726071 sshd-session[5413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 04:53:28.730397 systemd-logind[1476]: New session 18 of user core.
Sep 10 04:53:28.742073 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 10 04:53:28.913877 sshd[5416]: Connection closed by 10.0.0.1 port 52120
Sep 10 04:53:28.914589 sshd-session[5413]: pam_unix(sshd:session): session closed for user core
Sep 10 04:53:28.918207 systemd[1]: sshd@17-10.0.0.60:22-10.0.0.1:52120.service: Deactivated successfully.
Sep 10 04:53:28.920130 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 04:53:28.920974 systemd-logind[1476]: Session 18 logged out. Waiting for processes to exit.
Sep 10 04:53:28.922304 systemd-logind[1476]: Removed session 18.
Sep 10 04:53:33.925520 systemd[1]: Started sshd@18-10.0.0.60:22-10.0.0.1:37288.service - OpenSSH per-connection server daemon (10.0.0.1:37288).
Sep 10 04:53:33.988836 sshd[5433]: Accepted publickey for core from 10.0.0.1 port 37288 ssh2: RSA SHA256:TxZsB57VU0YIK7wItd2C2XslFM7pe98my4A8Wre8waU
Sep 10 04:53:33.990015 sshd-session[5433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 04:53:33.993971 systemd-logind[1476]: New session 19 of user core.
Sep 10 04:53:34.002077 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 10 04:53:34.147011 sshd[5436]: Connection closed by 10.0.0.1 port 37288
Sep 10 04:53:34.147317 sshd-session[5433]: pam_unix(sshd:session): session closed for user core
Sep 10 04:53:34.150047 systemd[1]: sshd@18-10.0.0.60:22-10.0.0.1:37288.service: Deactivated successfully.
Sep 10 04:53:34.152522 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 04:53:34.153986 systemd-logind[1476]: Session 19 logged out. Waiting for processes to exit.
Sep 10 04:53:34.154978 systemd-logind[1476]: Removed session 19.