Sep 10 23:52:32.775062 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 10 23:52:32.775083 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed Sep 10 22:24:03 -00 2025
Sep 10 23:52:32.775093 kernel: KASLR enabled
Sep 10 23:52:32.775098 kernel: efi: EFI v2.7 by EDK II
Sep 10 23:52:32.775103 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 10 23:52:32.775109 kernel: random: crng init done
Sep 10 23:52:32.775115 kernel: secureboot: Secure boot disabled
Sep 10 23:52:32.775121 kernel: ACPI: Early table checksum verification disabled
Sep 10 23:52:32.775127 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 10 23:52:32.775134 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 23:52:32.775166 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:52:32.775173 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:52:32.775179 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:52:32.775185 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:52:32.775192 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:52:32.775199 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:52:32.775206 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:52:32.775212 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:52:32.775218 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:52:32.775224 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 10 23:52:32.775230 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 10 23:52:32.775236 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:52:32.775242 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 10 23:52:32.775248 kernel: Zone ranges:
Sep 10 23:52:32.775254 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:52:32.775261 kernel: DMA32 empty
Sep 10 23:52:32.775267 kernel: Normal empty
Sep 10 23:52:32.775273 kernel: Device empty
Sep 10 23:52:32.775279 kernel: Movable zone start for each node
Sep 10 23:52:32.775285 kernel: Early memory node ranges
Sep 10 23:52:32.775291 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 10 23:52:32.775297 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 10 23:52:32.775309 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 10 23:52:32.775316 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 10 23:52:32.775322 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 10 23:52:32.775328 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 10 23:52:32.775333 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 10 23:52:32.775341 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 10 23:52:32.775347 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 10 23:52:32.775353 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 10 23:52:32.775366 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 10 23:52:32.775373 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 10 23:52:32.775379 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 10 23:52:32.775387 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:52:32.775393 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 10 23:52:32.775400 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 10 23:52:32.775406 kernel: psci: probing for conduit method from ACPI.
Sep 10 23:52:32.775413 kernel: psci: PSCIv1.1 detected in firmware.
Sep 10 23:52:32.775419 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 10 23:52:32.775425 kernel: psci: Trusted OS migration not required
Sep 10 23:52:32.775432 kernel: psci: SMC Calling Convention v1.1
Sep 10 23:52:32.775438 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 10 23:52:32.775445 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 10 23:52:32.775453 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 10 23:52:32.775459 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 10 23:52:32.775466 kernel: Detected PIPT I-cache on CPU0
Sep 10 23:52:32.775472 kernel: CPU features: detected: GIC system register CPU interface
Sep 10 23:52:32.775479 kernel: CPU features: detected: Spectre-v4
Sep 10 23:52:32.775485 kernel: CPU features: detected: Spectre-BHB
Sep 10 23:52:32.775492 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 10 23:52:32.775498 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 10 23:52:32.775505 kernel: CPU features: detected: ARM erratum 1418040
Sep 10 23:52:32.775511 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 10 23:52:32.775517 kernel: alternatives: applying boot alternatives
Sep 10 23:52:32.775525 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 10 23:52:32.775533 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 23:52:32.775540 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 23:52:32.775546 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 23:52:32.775553 kernel: Fallback order for Node 0: 0
Sep 10 23:52:32.775560 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 10 23:52:32.775566 kernel: Policy zone: DMA
Sep 10 23:52:32.775573 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 23:52:32.775579 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 10 23:52:32.775586 kernel: software IO TLB: area num 4.
Sep 10 23:52:32.775592 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 10 23:52:32.775599 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 10 23:52:32.775607 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 23:52:32.775614 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 23:52:32.775621 kernel: rcu: RCU event tracing is enabled.
Sep 10 23:52:32.775627 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 23:52:32.775634 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 23:52:32.775641 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 23:52:32.775647 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 23:52:32.775654 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 23:52:32.775660 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:52:32.775667 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:52:32.775673 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 10 23:52:32.775681 kernel: GICv3: 256 SPIs implemented
Sep 10 23:52:32.775687 kernel: GICv3: 0 Extended SPIs implemented
Sep 10 23:52:32.775693 kernel: Root IRQ handler: gic_handle_irq
Sep 10 23:52:32.775700 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 10 23:52:32.775706 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 10 23:52:32.775712 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 10 23:52:32.775719 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 10 23:52:32.775725 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 10 23:52:32.775732 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 10 23:52:32.775739 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 10 23:52:32.775745 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 10 23:52:32.775752 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 23:52:32.775759 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:52:32.775766 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 10 23:52:32.775772 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 10 23:52:32.775779 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 10 23:52:32.775785 kernel: arm-pv: using stolen time PV
Sep 10 23:52:32.775792 kernel: Console: colour dummy device 80x25
Sep 10 23:52:32.775799 kernel: ACPI: Core revision 20240827
Sep 10 23:52:32.775805 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 10 23:52:32.775812 kernel: pid_max: default: 32768 minimum: 301
Sep 10 23:52:32.775818 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 10 23:52:32.775826 kernel: landlock: Up and running.
Sep 10 23:52:32.775833 kernel: SELinux: Initializing.
Sep 10 23:52:32.775839 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:52:32.775861 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:52:32.775867 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 23:52:32.775874 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 23:52:32.775880 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 10 23:52:32.775887 kernel: Remapping and enabling EFI services.
Sep 10 23:52:32.775894 kernel: smp: Bringing up secondary CPUs ...
Sep 10 23:52:32.775906 kernel: Detected PIPT I-cache on CPU1
Sep 10 23:52:32.775913 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 10 23:52:32.775920 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 10 23:52:32.775928 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:52:32.775935 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 10 23:52:32.775942 kernel: Detected PIPT I-cache on CPU2
Sep 10 23:52:32.775949 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 10 23:52:32.775956 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 10 23:52:32.775964 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:52:32.775971 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 10 23:52:32.775978 kernel: Detected PIPT I-cache on CPU3
Sep 10 23:52:32.775985 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 10 23:52:32.775991 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 10 23:52:32.775998 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:52:32.776005 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 10 23:52:32.776012 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 23:52:32.776019 kernel: SMP: Total of 4 processors activated.
Sep 10 23:52:32.776027 kernel: CPU: All CPU(s) started at EL1
Sep 10 23:52:32.776034 kernel: CPU features: detected: 32-bit EL0 Support
Sep 10 23:52:32.776041 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 10 23:52:32.776047 kernel: CPU features: detected: Common not Private translations
Sep 10 23:52:32.776054 kernel: CPU features: detected: CRC32 instructions
Sep 10 23:52:32.776061 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 10 23:52:32.776068 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 10 23:52:32.776075 kernel: CPU features: detected: LSE atomic instructions
Sep 10 23:52:32.776081 kernel: CPU features: detected: Privileged Access Never
Sep 10 23:52:32.776089 kernel: CPU features: detected: RAS Extension Support
Sep 10 23:52:32.776096 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 10 23:52:32.776103 kernel: alternatives: applying system-wide alternatives
Sep 10 23:52:32.776110 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 10 23:52:32.776117 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9084K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 10 23:52:32.776124 kernel: devtmpfs: initialized
Sep 10 23:52:32.776131 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 23:52:32.776150 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 23:52:32.776157 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 10 23:52:32.776166 kernel: 0 pages in range for non-PLT usage
Sep 10 23:52:32.776173 kernel: 508560 pages in range for PLT usage
Sep 10 23:52:32.776179 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 23:52:32.776186 kernel: SMBIOS 3.0.0 present.
Sep 10 23:52:32.776193 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 10 23:52:32.776200 kernel: DMI: Memory slots populated: 1/1
Sep 10 23:52:32.776207 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 23:52:32.776214 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 10 23:52:32.776221 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 10 23:52:32.776229 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 10 23:52:32.776236 kernel: audit: initializing netlink subsys (disabled)
Sep 10 23:52:32.776243 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 10 23:52:32.776250 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 23:52:32.776257 kernel: cpuidle: using governor menu
Sep 10 23:52:32.776264 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 10 23:52:32.776270 kernel: ASID allocator initialised with 32768 entries
Sep 10 23:52:32.776277 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 23:52:32.776284 kernel: Serial: AMBA PL011 UART driver
Sep 10 23:52:32.776292 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 23:52:32.776299 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 23:52:32.776311 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 10 23:52:32.776318 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 10 23:52:32.776325 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 23:52:32.776331 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 23:52:32.776338 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 10 23:52:32.776345 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 10 23:52:32.776352 kernel: ACPI: Added _OSI(Module Device)
Sep 10 23:52:32.776359 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 23:52:32.776367 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 23:52:32.776374 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 23:52:32.776381 kernel: ACPI: Interpreter enabled
Sep 10 23:52:32.776388 kernel: ACPI: Using GIC for interrupt routing
Sep 10 23:52:32.776394 kernel: ACPI: MCFG table detected, 1 entries
Sep 10 23:52:32.776401 kernel: ACPI: CPU0 has been hot-added
Sep 10 23:52:32.776408 kernel: ACPI: CPU1 has been hot-added
Sep 10 23:52:32.776415 kernel: ACPI: CPU2 has been hot-added
Sep 10 23:52:32.776422 kernel: ACPI: CPU3 has been hot-added
Sep 10 23:52:32.776430 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 10 23:52:32.776437 kernel: printk: legacy console [ttyAMA0] enabled
Sep 10 23:52:32.776444 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 23:52:32.776571 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 23:52:32.776638 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 10 23:52:32.776697 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 10 23:52:32.776755 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 10 23:52:32.776816 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 10 23:52:32.776825 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 10 23:52:32.776833 kernel: PCI host bridge to bus 0000:00
Sep 10 23:52:32.776899 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 10 23:52:32.776955 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 10 23:52:32.777009 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 10 23:52:32.777063 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 23:52:32.777176 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 10 23:52:32.777252 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 10 23:52:32.777325 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 10 23:52:32.777389 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 10 23:52:32.777449 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 23:52:32.777509 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 10 23:52:32.777570 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 10 23:52:32.777632 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 10 23:52:32.777687 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 10 23:52:32.777739 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 10 23:52:32.777791 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 10 23:52:32.777800 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 10 23:52:32.777808 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 10 23:52:32.777814 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 10 23:52:32.777823 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 10 23:52:32.777830 kernel: iommu: Default domain type: Translated
Sep 10 23:52:32.777837 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 10 23:52:32.777844 kernel: efivars: Registered efivars operations
Sep 10 23:52:32.777851 kernel: vgaarb: loaded
Sep 10 23:52:32.777857 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 10 23:52:32.777864 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 23:52:32.777871 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 23:52:32.777878 kernel: pnp: PnP ACPI init
Sep 10 23:52:32.777950 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 10 23:52:32.777960 kernel: pnp: PnP ACPI: found 1 devices
Sep 10 23:52:32.777967 kernel: NET: Registered PF_INET protocol family
Sep 10 23:52:32.777974 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 23:52:32.777981 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 23:52:32.777988 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 23:52:32.777995 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 23:52:32.778002 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 23:52:32.778011 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 23:52:32.778018 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:52:32.778025 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:52:32.778032 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 23:52:32.778038 kernel: PCI: CLS 0 bytes, default 64
Sep 10 23:52:32.778045 kernel: kvm [1]: HYP mode not available
Sep 10 23:52:32.778052 kernel: Initialise system trusted keyrings
Sep 10 23:52:32.778059 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 23:52:32.778066 kernel: Key type asymmetric registered
Sep 10 23:52:32.778073 kernel: Asymmetric key parser 'x509' registered
Sep 10 23:52:32.778081 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 10 23:52:32.778088 kernel: io scheduler mq-deadline registered
Sep 10 23:52:32.778095 kernel: io scheduler kyber registered
Sep 10 23:52:32.778102 kernel: io scheduler bfq registered
Sep 10 23:52:32.778109 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 10 23:52:32.778116 kernel: ACPI: button: Power Button [PWRB]
Sep 10 23:52:32.778124 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 10 23:52:32.778236 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 10 23:52:32.778248 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 23:52:32.778258 kernel: thunder_xcv, ver 1.0
Sep 10 23:52:32.778265 kernel: thunder_bgx, ver 1.0
Sep 10 23:52:32.778272 kernel: nicpf, ver 1.0
Sep 10 23:52:32.778279 kernel: nicvf, ver 1.0
Sep 10 23:52:32.778362 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 10 23:52:32.778420 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T23:52:32 UTC (1757548352)
Sep 10 23:52:32.778429 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 10 23:52:32.778436 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 10 23:52:32.778445 kernel: watchdog: NMI not fully supported
Sep 10 23:52:32.778452 kernel: watchdog: Hard watchdog permanently disabled
Sep 10 23:52:32.778459 kernel: NET: Registered PF_INET6 protocol family
Sep 10 23:52:32.778466 kernel: Segment Routing with IPv6
Sep 10 23:52:32.778473 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 23:52:32.778480 kernel: NET: Registered PF_PACKET protocol family
Sep 10 23:52:32.778486 kernel: Key type dns_resolver registered
Sep 10 23:52:32.778493 kernel: registered taskstats version 1
Sep 10 23:52:32.778500 kernel: Loading compiled-in X.509 certificates
Sep 10 23:52:32.778509 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 3c20aab1105575c84ea94c1a59a27813fcebdea7'
Sep 10 23:52:32.778516 kernel: Demotion targets for Node 0: null
Sep 10 23:52:32.778523 kernel: Key type .fscrypt registered
Sep 10 23:52:32.778530 kernel: Key type fscrypt-provisioning registered
Sep 10 23:52:32.778537 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 23:52:32.778544 kernel: ima: Allocated hash algorithm: sha1
Sep 10 23:52:32.778551 kernel: ima: No architecture policies found
Sep 10 23:52:32.778558 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 10 23:52:32.778565 kernel: clk: Disabling unused clocks
Sep 10 23:52:32.778573 kernel: PM: genpd: Disabling unused power domains
Sep 10 23:52:32.778580 kernel: Warning: unable to open an initial console.
Sep 10 23:52:32.778587 kernel: Freeing unused kernel memory: 38976K
Sep 10 23:52:32.778594 kernel: Run /init as init process
Sep 10 23:52:32.778601 kernel: with arguments:
Sep 10 23:52:32.778608 kernel: /init
Sep 10 23:52:32.778614 kernel: with environment:
Sep 10 23:52:32.778621 kernel: HOME=/
Sep 10 23:52:32.778628 kernel: TERM=linux
Sep 10 23:52:32.778636 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 23:52:32.778645 systemd[1]: Successfully made /usr/ read-only.
Sep 10 23:52:32.778654 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:52:32.778663 systemd[1]: Detected virtualization kvm.
Sep 10 23:52:32.778671 systemd[1]: Detected architecture arm64.
Sep 10 23:52:32.778678 systemd[1]: Running in initrd.
Sep 10 23:52:32.778685 systemd[1]: No hostname configured, using default hostname.
Sep 10 23:52:32.778695 systemd[1]: Hostname set to .
Sep 10 23:52:32.778702 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:52:32.778709 systemd[1]: Queued start job for default target initrd.target.
Sep 10 23:52:32.778717 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:52:32.778724 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:52:32.778732 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 23:52:32.778740 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:52:32.778748 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 23:52:32.778758 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 23:52:32.778766 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 23:52:32.778774 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 23:52:32.778781 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:52:32.778788 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:52:32.778796 systemd[1]: Reached target paths.target - Path Units.
Sep 10 23:52:32.778803 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:52:32.778812 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:52:32.778819 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 23:52:32.778826 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:52:32.778834 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:52:32.778841 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 23:52:32.778848 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 10 23:52:32.778856 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:52:32.778863 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:52:32.778872 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:52:32.778880 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 23:52:32.778887 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 23:52:32.778895 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:52:32.778902 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 23:52:32.778910 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 10 23:52:32.778918 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 23:52:32.778925 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:52:32.778932 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:52:32.778941 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:52:32.778948 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 23:52:32.778956 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:52:32.778964 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 23:52:32.778973 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:52:32.778994 systemd-journald[245]: Collecting audit messages is disabled.
Sep 10 23:52:32.779013 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:52:32.779021 systemd-journald[245]: Journal started
Sep 10 23:52:32.779041 systemd-journald[245]: Runtime Journal (/run/log/journal/43cc83e0c5fd414f944f9fb40b2be84f) is 6M, max 48.5M, 42.4M free.
Sep 10 23:52:32.784191 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 23:52:32.784218 kernel: Bridge firewalling registered
Sep 10 23:52:32.769231 systemd-modules-load[247]: Inserted module 'overlay'
Sep 10 23:52:32.786888 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 23:52:32.784705 systemd-modules-load[247]: Inserted module 'br_netfilter'
Sep 10 23:52:32.790487 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:52:32.790876 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:52:32.792161 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:52:32.796514 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:52:32.798082 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:52:32.801845 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:52:32.808420 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:52:32.810449 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 23:52:32.812576 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:52:32.814478 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 10 23:52:32.815493 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:52:32.817636 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:52:32.822039 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:52:32.827301 dracut-cmdline[281]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=dd9c14cce645c634e06a91b09405eea80057f02909b9267c482dc457df1cddec
Sep 10 23:52:32.859087 systemd-resolved[295]: Positive Trust Anchors:
Sep 10 23:52:32.859106 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:52:32.859152 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:52:32.863860 systemd-resolved[295]: Defaulting to hostname 'linux'.
Sep 10 23:52:32.864776 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:52:32.868999 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:52:32.897162 kernel: SCSI subsystem initialized
Sep 10 23:52:32.902153 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 23:52:32.910170 kernel: iscsi: registered transport (tcp)
Sep 10 23:52:32.922235 kernel: iscsi: registered transport (qla4xxx)
Sep 10 23:52:32.922253 kernel: QLogic iSCSI HBA Driver
Sep 10 23:52:32.938940 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:52:32.954164 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:52:32.956743 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:52:32.999193 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:52:33.001466 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 23:52:33.063170 kernel: raid6: neonx8 gen() 15793 MB/s
Sep 10 23:52:33.080160 kernel: raid6: neonx4 gen() 15791 MB/s
Sep 10 23:52:33.097169 kernel: raid6: neonx2 gen() 13264 MB/s
Sep 10 23:52:33.114164 kernel: raid6: neonx1 gen() 10536 MB/s
Sep 10 23:52:33.131169 kernel: raid6: int64x8 gen() 6889 MB/s
Sep 10 23:52:33.148158 kernel: raid6: int64x4 gen() 7343 MB/s
Sep 10 23:52:33.165161 kernel: raid6: int64x2 gen() 6096 MB/s
Sep 10 23:52:33.182282 kernel: raid6: int64x1 gen() 5046 MB/s
Sep 10 23:52:33.182309 kernel: raid6: using algorithm neonx8 gen() 15793 MB/s
Sep 10 23:52:33.200327 kernel: raid6: .... xor() 12054 MB/s, rmw enabled
Sep 10 23:52:33.200342 kernel: raid6: using neon recovery algorithm
Sep 10 23:52:33.205158 kernel: xor: measuring software checksum speed
Sep 10 23:52:33.206470 kernel: 8regs : 18515 MB/sec
Sep 10 23:52:33.206486 kernel: 32regs : 21676 MB/sec
Sep 10 23:52:33.207816 kernel: arm64_neon : 28022 MB/sec
Sep 10 23:52:33.207831 kernel: xor: using function: arm64_neon (28022 MB/sec)
Sep 10 23:52:33.259170 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 23:52:33.265892 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:52:33.268559 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:52:33.299791 systemd-udevd[497]: Using default interface naming scheme 'v255'.
Sep 10 23:52:33.303856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:52:33.305899 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 23:52:33.332236 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation
Sep 10 23:52:33.355215 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 23:52:33.357233 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:52:33.416182 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:52:33.418997 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 23:52:33.472536 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 10 23:52:33.478691 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:52:33.480729 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 23:52:33.478814 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:52:33.487180 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 23:52:33.487218 kernel: GPT:9289727 != 19775487
Sep 10 23:52:33.487236 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 23:52:33.487263 kernel: GPT:9289727 != 19775487
Sep 10 23:52:33.487274 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 23:52:33.487283 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:52:33.484078 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:52:33.489108 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:52:33.515721 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:52:33.524773 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 23:52:33.531395 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 23:52:33.539549 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 23:52:33.542105 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 23:52:33.551151 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 23:52:33.559732 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 23:52:33.560979 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 23:52:33.563258 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:52:33.565329 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:52:33.568042 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 23:52:33.569938 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 23:52:33.591154 disk-uuid[591]: Primary Header is updated.
Sep 10 23:52:33.591154 disk-uuid[591]: Secondary Entries is updated.
Sep 10 23:52:33.591154 disk-uuid[591]: Secondary Header is updated.
Sep 10 23:52:33.595825 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 23:52:33.599172 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:52:33.602165 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:52:34.605895 disk-uuid[596]: The operation has completed successfully.
Sep 10 23:52:34.608005 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:52:34.635486 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 23:52:34.635588 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 23:52:34.664080 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 23:52:34.691171 sh[612]: Success
Sep 10 23:52:34.704169 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 23:52:34.704210 kernel: device-mapper: uevent: version 1.0.3
Sep 10 23:52:34.704231 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 10 23:52:34.712183 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 10 23:52:34.741129 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 23:52:34.750455 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 23:52:34.752963 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 23:52:34.763169 kernel: BTRFS: device fsid 3b17f37f-d395-4116-a46d-e07f86112ade devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (625)
Sep 10 23:52:34.763203 kernel: BTRFS info (device dm-0): first mount of filesystem 3b17f37f-d395-4116-a46d-e07f86112ade
Sep 10 23:52:34.765299 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:52:34.769475 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 23:52:34.769493 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 10 23:52:34.770438 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 23:52:34.772624 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 10 23:52:34.774943 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 23:52:34.775738 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 23:52:34.778528 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 23:52:34.804172 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (657)
Sep 10 23:52:34.804215 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:52:34.806262 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:52:34.809751 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:52:34.809781 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:52:34.814151 kernel: BTRFS info (device vda6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:52:34.814750 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 23:52:34.817304 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 23:52:34.876465 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:52:34.879602 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:52:34.921324 systemd-networkd[797]: lo: Link UP
Sep 10 23:52:34.922173 systemd-networkd[797]: lo: Gained carrier
Sep 10 23:52:34.923584 systemd-networkd[797]: Enumeration completed
Sep 10 23:52:34.922677 ignition[705]: Ignition 2.21.0
Sep 10 23:52:34.923731 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 23:52:34.922684 ignition[705]: Stage: fetch-offline
Sep 10 23:52:34.924799 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:52:34.922714 ignition[705]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:52:34.924803 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 23:52:34.922721 ignition[705]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:52:34.925650 systemd-networkd[797]: eth0: Link UP
Sep 10 23:52:34.922878 ignition[705]: parsed url from cmdline: ""
Sep 10 23:52:34.925750 systemd-networkd[797]: eth0: Gained carrier
Sep 10 23:52:34.922881 ignition[705]: no config URL provided
Sep 10 23:52:34.925759 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:52:34.922885 ignition[705]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 23:52:34.926809 systemd[1]: Reached target network.target - Network.
Sep 10 23:52:34.922892 ignition[705]: no config at "/usr/lib/ignition/user.ign"
Sep 10 23:52:34.922909 ignition[705]: op(1): [started] loading QEMU firmware config module
Sep 10 23:52:34.922913 ignition[705]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 23:52:34.948181 systemd-networkd[797]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 23:52:34.936710 ignition[705]: op(1): [finished] loading QEMU firmware config module
Sep 10 23:52:34.986833 ignition[705]: parsing config with SHA512: 7626ae618193c7217bfe036da84824d9617b1e880d3284002fff69393ececd439053c76163041079e120e80b8ba9ea973d36cdaea11524ba0f168ba351189c03
Sep 10 23:52:34.991158 unknown[705]: fetched base config from "system"
Sep 10 23:52:34.991171 unknown[705]: fetched user config from "qemu"
Sep 10 23:52:34.991549 ignition[705]: fetch-offline: fetch-offline passed
Sep 10 23:52:34.991604 ignition[705]: Ignition finished successfully
Sep 10 23:52:34.994523 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 23:52:34.995828 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 23:52:34.996604 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 23:52:35.032070 ignition[810]: Ignition 2.21.0
Sep 10 23:52:35.032089 ignition[810]: Stage: kargs
Sep 10 23:52:35.032260 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:52:35.032269 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:52:35.034085 ignition[810]: kargs: kargs passed
Sep 10 23:52:35.037781 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 23:52:35.034135 ignition[810]: Ignition finished successfully
Sep 10 23:52:35.039825 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 23:52:35.064472 ignition[817]: Ignition 2.21.0
Sep 10 23:52:35.064488 ignition[817]: Stage: disks
Sep 10 23:52:35.064622 ignition[817]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:52:35.064631 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:52:35.066012 ignition[817]: disks: disks passed
Sep 10 23:52:35.066071 ignition[817]: Ignition finished successfully
Sep 10 23:52:35.069400 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 23:52:35.070627 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 23:52:35.072214 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 23:52:35.074153 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:52:35.076044 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 23:52:35.078110 systemd[1]: Reached target basic.target - Basic System.
Sep 10 23:52:35.080753 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 23:52:35.103654 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 10 23:52:35.107619 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 23:52:35.109760 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 23:52:35.168157 kernel: EXT4-fs (vda9): mounted filesystem fcae628f-5f9a-4539-a638-93fb1399b5d7 r/w with ordered data mode. Quota mode: none.
Sep 10 23:52:35.168498 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 23:52:35.169667 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:52:35.172952 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 23:52:35.175339 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 23:52:35.176300 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 23:52:35.176341 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 23:52:35.176364 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 23:52:35.193566 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 23:52:35.195490 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 23:52:35.200175 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (837)
Sep 10 23:52:35.202496 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:52:35.202528 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:52:35.205941 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:52:35.205980 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:52:35.207123 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 23:52:35.233835 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 23:52:35.237065 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory
Sep 10 23:52:35.240855 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 23:52:35.244631 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 23:52:35.305423 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 23:52:35.307340 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 23:52:35.308819 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 23:52:35.331158 kernel: BTRFS info (device vda6): last unmount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:52:35.344069 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 23:52:35.350737 ignition[951]: INFO : Ignition 2.21.0
Sep 10 23:52:35.350737 ignition[951]: INFO : Stage: mount
Sep 10 23:52:35.352299 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:52:35.352299 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:52:35.352299 ignition[951]: INFO : mount: mount passed
Sep 10 23:52:35.352299 ignition[951]: INFO : Ignition finished successfully
Sep 10 23:52:35.355196 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 23:52:35.357662 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 23:52:35.771304 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 23:52:35.772941 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 23:52:35.793210 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (964)
Sep 10 23:52:35.793240 kernel: BTRFS info (device vda6): first mount of filesystem 538ffae8-60fb-4c82-9100-efc4d2404f73
Sep 10 23:52:35.793251 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:52:35.796744 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:52:35.796769 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:52:35.798190 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 23:52:35.829195 ignition[981]: INFO : Ignition 2.21.0
Sep 10 23:52:35.829195 ignition[981]: INFO : Stage: files
Sep 10 23:52:35.831157 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:52:35.831157 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:52:35.831157 ignition[981]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 23:52:35.834560 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 23:52:35.834560 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 23:52:35.834560 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 23:52:35.834560 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 23:52:35.834560 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 23:52:35.834468 unknown[981]: wrote ssh authorized keys file for user: core
Sep 10 23:52:35.867947 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 10 23:52:35.867947 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 10 23:52:35.884852 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 23:52:36.218179 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 10 23:52:36.220491 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 23:52:36.220491 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 23:52:36.220491 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 23:52:36.220491 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 23:52:36.220491 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 23:52:36.220491 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 23:52:36.220491 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 23:52:36.220491 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 23:52:36.234781 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 23:52:36.234781 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 23:52:36.234781 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:52:36.234781 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:52:36.234781 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:52:36.234781 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 10 23:52:36.361699 systemd-networkd[797]: eth0: Gained IPv6LL
Sep 10 23:52:36.802953 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 10 23:52:37.382911 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 10 23:52:37.382911 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 10 23:52:37.387111 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 23:52:37.390817 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 23:52:37.390817 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 10 23:52:37.390817 ignition[981]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 10 23:52:37.390817 ignition[981]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 23:52:37.407211 ignition[981]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 23:52:37.407211 ignition[981]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 10 23:52:37.407211 ignition[981]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 23:52:37.419778 ignition[981]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 23:52:37.423101 ignition[981]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 23:52:37.426351 ignition[981]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 23:52:37.426351 ignition[981]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 23:52:37.426351 ignition[981]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 23:52:37.426351 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 23:52:37.426351 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 23:52:37.426351 ignition[981]: INFO : files: files passed
Sep 10 23:52:37.426351 ignition[981]: INFO : Ignition finished successfully
Sep 10 23:52:37.427218 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 23:52:37.429838 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 23:52:37.431955 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 23:52:37.440317 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 23:52:37.441615 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 23:52:37.445780 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 23:52:37.447184 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:52:37.447184 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:52:37.451595 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:52:37.448384 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 23:52:37.450976 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 23:52:37.453399 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 23:52:37.494001 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 23:52:37.494134 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 23:52:37.496525 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 10 23:52:37.498388 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 10 23:52:37.500221 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 10 23:52:37.501101 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 10 23:52:37.527995 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 23:52:37.530612 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 10 23:52:37.550270 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:52:37.551556 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:52:37.553654 systemd[1]: Stopped target timers.target - Timer Units.
Sep 10 23:52:37.555471 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 23:52:37.555600 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 23:52:37.558214 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 10 23:52:37.560215 systemd[1]: Stopped target basic.target - Basic System.
Sep 10 23:52:37.562003 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 10 23:52:37.563743 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 23:52:37.565712 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 10 23:52:37.567691 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 10 23:52:37.569620 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 10 23:52:37.571513 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 23:52:37.573419 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 10 23:52:37.575369 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 10 23:52:37.577237 systemd[1]: Stopped target swap.target - Swaps.
Sep 10 23:52:37.578916 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 23:52:37.579046 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 23:52:37.581528 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:52:37.583582 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:52:37.585501 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 10 23:52:37.586226 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:52:37.587743 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 10 23:52:37.587861 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 10 23:52:37.590671 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 23:52:37.590790 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 23:52:37.592730 systemd[1]: Stopped target paths.target - Path Units.
Sep 10 23:52:37.594279 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 23:52:37.599202 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:52:37.600469 systemd[1]: Stopped target slices.target - Slice Units.
Sep 10 23:52:37.602578 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 10 23:52:37.604086 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 23:52:37.604192 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:52:37.605789 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 23:52:37.605870 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:52:37.607427 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 10 23:52:37.607547 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 23:52:37.609419 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 10 23:52:37.609526 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 10 23:52:37.611885 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 10 23:52:37.614558 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 10 23:52:37.615791 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 10 23:52:37.615918 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:52:37.617784 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 10 23:52:37.617891 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 23:52:37.623335 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 10 23:52:37.623417 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 10 23:52:37.633152 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 23:52:37.637546 ignition[1036]: INFO : Ignition 2.21.0
Sep 10 23:52:37.637546 ignition[1036]: INFO : Stage: umount
Sep 10 23:52:37.639738 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:52:37.639738 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:52:37.642089 ignition[1036]: INFO : umount: umount passed
Sep 10 23:52:37.642972 ignition[1036]: INFO : Ignition finished successfully
Sep 10 23:52:37.643795 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 23:52:37.643912 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 10 23:52:37.645117 systemd[1]: Stopped target network.target - Network.
Sep 10 23:52:37.646616 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 23:52:37.646677 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 10 23:52:37.648460 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 23:52:37.648508 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 10 23:52:37.650111 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 23:52:37.650171 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 10 23:52:37.651868 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 10 23:52:37.651909 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 10 23:52:37.653830 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 10 23:52:37.655601 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 10 23:52:37.662037 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 23:52:37.662182 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 10 23:52:37.665870 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 10 23:52:37.666844 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 23:52:37.666967 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 10 23:52:37.670736 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 10 23:52:37.670909 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 10 23:52:37.672087 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 23:52:37.672124 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:52:37.675151 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 10 23:52:37.676114 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 23:52:37.676241 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:52:37.678244 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 23:52:37.678299 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:52:37.681179 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 23:52:37.681220 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:52:37.683414 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 10 23:52:37.683459 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:52:37.686619 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:52:37.690323 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 10 23:52:37.690382 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 10 23:52:37.701655 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 23:52:37.701778 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 10 23:52:37.703334 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 23:52:37.703382 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 10 23:52:37.705439 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 23:52:37.705581 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:52:37.707513 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 23:52:37.709170 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 10 23:52:37.710843 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 23:52:37.710890 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:52:37.712092 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 23:52:37.712124 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:52:37.714161 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 23:52:37.714215 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:52:37.716863 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 23:52:37.716913 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:52:37.719801 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 23:52:37.719854 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:52:37.723616 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 10 23:52:37.724907 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 10 23:52:37.724968 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:52:37.727896 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 10 23:52:37.727941 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:52:37.731027 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 10 23:52:37.731070 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:52:37.734481 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 23:52:37.734529 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:52:37.736707 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:52:37.736753 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:52:37.740966 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 10 23:52:37.741012 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Sep 10 23:52:37.741040 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 10 23:52:37.741074 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 10 23:52:37.741630 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 10 23:52:37.743174 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 10 23:52:37.745470 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 10 23:52:37.747750 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 10 23:52:37.756961 systemd[1]: Switching root.
Sep 10 23:52:37.790262 systemd-journald[245]: Journal stopped
Sep 10 23:52:38.544932 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Sep 10 23:52:38.544993 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 23:52:38.545008 kernel: SELinux: policy capability open_perms=1
Sep 10 23:52:38.545023 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 23:52:38.545035 kernel: SELinux: policy capability always_check_network=0
Sep 10 23:52:38.545045 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 23:52:38.545057 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 23:52:38.545070 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 23:52:38.545079 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 23:52:38.545088 kernel: SELinux: policy capability userspace_initial_context=0
Sep 10 23:52:38.545098 kernel: audit: type=1403 audit(1757548357.951:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 10 23:52:38.545108 systemd[1]: Successfully loaded SELinux policy in 48.082ms.
Sep 10 23:52:38.545124 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.151ms.
Sep 10 23:52:38.545203 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:52:38.545218 systemd[1]: Detected virtualization kvm.
Sep 10 23:52:38.545228 systemd[1]: Detected architecture arm64.
Sep 10 23:52:38.545238 systemd[1]: Detected first boot.
Sep 10 23:52:38.545248 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:52:38.545257 kernel: NET: Registered PF_VSOCK protocol family
Sep 10 23:52:38.545267 zram_generator::config[1082]: No configuration found.
Sep 10 23:52:38.545289 systemd[1]: Populated /etc with preset unit settings.
Sep 10 23:52:38.545304 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 10 23:52:38.545315 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 10 23:52:38.545325 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 10 23:52:38.545335 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:52:38.545345 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 10 23:52:38.545356 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 10 23:52:38.545366 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 10 23:52:38.545376 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 10 23:52:38.545386 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 10 23:52:38.545398 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 10 23:52:38.545408 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 10 23:52:38.545419 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 10 23:52:38.545429 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:52:38.545439 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:52:38.545449 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 10 23:52:38.545472 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 10 23:52:38.545482 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 10 23:52:38.545493 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:52:38.545504 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 10 23:52:38.545514 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:52:38.545524 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:52:38.545535 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 10 23:52:38.545544 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 10 23:52:38.545554 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:52:38.545565 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 10 23:52:38.545577 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:52:38.545589 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:52:38.545600 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:52:38.545610 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:52:38.545620 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 10 23:52:38.545630 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 10 23:52:38.545640 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 10 23:52:38.545650 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:52:38.545660 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:52:38.545671 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:52:38.545682 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 10 23:52:38.545692 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 10 23:52:38.545702 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 10 23:52:38.545713 systemd[1]: Mounting media.mount - External Media Directory...
Sep 10 23:52:38.545723 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 10 23:52:38.545733 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 10 23:52:38.545744 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 10 23:52:38.545754 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 10 23:52:38.545766 systemd[1]: Reached target machines.target - Containers.
Sep 10 23:52:38.545776 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 10 23:52:38.545786 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:52:38.545796 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:52:38.545807 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 10 23:52:38.545817 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:52:38.545827 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:52:38.545838 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:52:38.545848 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 10 23:52:38.545859 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:52:38.545870 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 23:52:38.545882 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 10 23:52:38.545892 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 10 23:52:38.545902 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 10 23:52:38.545912 kernel: fuse: init (API version 7.41)
Sep 10 23:52:38.545921 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 10 23:52:38.545932 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:52:38.545943 kernel: loop: module loaded
Sep 10 23:52:38.545956 kernel: ACPI: bus type drm_connector registered
Sep 10 23:52:38.545966 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:52:38.545976 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:52:38.545986 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:52:38.545997 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 10 23:52:38.546007 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 10 23:52:38.546017 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:52:38.546050 systemd-journald[1157]: Collecting audit messages is disabled.
Sep 10 23:52:38.546071 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 10 23:52:38.546081 systemd[1]: Stopped verity-setup.service.
Sep 10 23:52:38.546092 systemd-journald[1157]: Journal started
Sep 10 23:52:38.546112 systemd-journald[1157]: Runtime Journal (/run/log/journal/43cc83e0c5fd414f944f9fb40b2be84f) is 6M, max 48.5M, 42.4M free.
Sep 10 23:52:38.306882 systemd[1]: Queued start job for default target multi-user.target.
Sep 10 23:52:38.327070 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 10 23:52:38.327450 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 10 23:52:38.550634 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:52:38.551248 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 10 23:52:38.552364 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 10 23:52:38.553607 systemd[1]: Mounted media.mount - External Media Directory.
Sep 10 23:52:38.554708 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 10 23:52:38.555902 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 10 23:52:38.557122 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 10 23:52:38.558331 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 10 23:52:38.559705 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:52:38.561176 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 10 23:52:38.561342 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 10 23:52:38.562705 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:52:38.562875 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:52:38.564317 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:52:38.564474 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:52:38.565724 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:52:38.565879 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:52:38.567376 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 10 23:52:38.567536 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 10 23:52:38.568987 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:52:38.569130 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:52:38.570487 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:52:38.571954 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:52:38.573509 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 10 23:52:38.574958 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 10 23:52:38.586943 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:52:38.589335 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 10 23:52:38.591339 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 10 23:52:38.592442 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 10 23:52:38.592470 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:52:38.594410 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 10 23:52:38.601243 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 10 23:52:38.602390 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:52:38.603289 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 10 23:52:38.605081 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 10 23:52:38.606364 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:52:38.607596 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 10 23:52:38.608741 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:52:38.612928 systemd-journald[1157]: Time spent on flushing to /var/log/journal/43cc83e0c5fd414f944f9fb40b2be84f is 18.845ms for 889 entries.
Sep 10 23:52:38.612928 systemd-journald[1157]: System Journal (/var/log/journal/43cc83e0c5fd414f944f9fb40b2be84f) is 8M, max 195.6M, 187.6M free.
Sep 10 23:52:38.650515 systemd-journald[1157]: Received client request to flush runtime journal.
Sep 10 23:52:38.650564 kernel: loop0: detected capacity change from 0 to 207008
Sep 10 23:52:38.612307 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:52:38.617471 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 10 23:52:38.620319 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:52:38.625174 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:52:38.626683 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 10 23:52:38.627978 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 10 23:52:38.629643 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 10 23:52:38.634975 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 10 23:52:38.640325 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 10 23:52:38.641930 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:52:38.647649 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Sep 10 23:52:38.647659 systemd-tmpfiles[1200]: ACLs are not supported, ignoring.
Sep 10 23:52:38.653211 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:52:38.656333 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 10 23:52:38.658155 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 10 23:52:38.661679 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 10 23:52:38.676483 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 10 23:52:38.677734 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 10 23:52:38.690169 kernel: loop1: detected capacity change from 0 to 107312
Sep 10 23:52:38.697235 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 10 23:52:38.702270 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:52:38.719570 kernel: loop2: detected capacity change from 0 to 138376
Sep 10 23:52:38.725812 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 10 23:52:38.725839 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 10 23:52:38.731192 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:52:38.742584 kernel: loop3: detected capacity change from 0 to 207008
Sep 10 23:52:38.757306 kernel: loop4: detected capacity change from 0 to 107312
Sep 10 23:52:38.763190 kernel: loop5: detected capacity change from 0 to 138376
Sep 10 23:52:38.768512 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 10 23:52:38.768863 (sd-merge)[1224]: Merged extensions into '/usr'.
Sep 10 23:52:38.774000 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 10 23:52:38.774023 systemd[1]: Reloading...
Sep 10 23:52:38.838178 zram_generator::config[1250]: No configuration found.
Sep 10 23:52:38.883767 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 10 23:52:38.912555 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:52:38.974490 systemd[1]: Reloading finished in 200 ms.
Sep 10 23:52:38.990620 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 10 23:52:38.992055 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 10 23:52:39.003480 systemd[1]: Starting ensure-sysext.service...
Sep 10 23:52:39.005182 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:52:39.013053 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)...
Sep 10 23:52:39.013068 systemd[1]: Reloading...
Sep 10 23:52:39.020224 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 10 23:52:39.020533 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 10 23:52:39.020821 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 10 23:52:39.021085 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 10 23:52:39.021770 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 10 23:52:39.022058 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Sep 10 23:52:39.022192 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
Sep 10 23:52:39.024925 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:52:39.025008 systemd-tmpfiles[1287]: Skipping /boot
Sep 10 23:52:39.033625 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:52:39.033731 systemd-tmpfiles[1287]: Skipping /boot
Sep 10 23:52:39.065181 zram_generator::config[1314]: No configuration found.
Sep 10 23:52:39.125978 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:52:39.186565 systemd[1]: Reloading finished in 173 ms.
Sep 10 23:52:39.207518 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 10 23:52:39.212917 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:52:39.227202 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:52:39.229322 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 10 23:52:39.231408 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 10 23:52:39.234379 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:52:39.236655 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:52:39.240834 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 10 23:52:39.245856 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:52:39.250620 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:52:39.252978 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:52:39.255331 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:52:39.257686 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:52:39.257808 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:52:39.259502 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 10 23:52:39.263181 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 10 23:52:39.265077 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:52:39.265324 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:52:39.267108 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:52:39.267346 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:52:39.276853 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:52:39.278317 systemd-udevd[1355]: Using default interface naming scheme 'v255'.
Sep 10 23:52:39.278873 augenrules[1382]: No rules
Sep 10 23:52:39.279166 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:52:39.282741 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:52:39.283921 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:52:39.284023 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:52:39.285375 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 10 23:52:39.287453 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:52:39.299600 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:52:39.301828 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:52:39.304849 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 10 23:52:39.306683 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 10 23:52:39.309898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:52:39.310128 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:52:39.312744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:52:39.313436 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:52:39.315646 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:52:39.316198 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:52:39.318644 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 10 23:52:39.346374 systemd[1]: Finished ensure-sysext.service.
Sep 10 23:52:39.353320 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:52:39.354347 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:52:39.355249 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:52:39.357342 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:52:39.365498 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:52:39.369336 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:52:39.372226 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:52:39.372267 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:52:39.374676 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:52:39.380236 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 10 23:52:39.381442 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 23:52:39.381929 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:52:39.382908 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:52:39.386286 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 10 23:52:39.392623 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:52:39.394077 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:52:39.403014 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:52:39.403939 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:52:39.404099 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:52:39.406566 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:52:39.406720 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:52:39.407952 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:52:39.417915 augenrules[1426]: /sbin/augenrules: No change
Sep 10 23:52:39.422403 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 10 23:52:39.426404 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 23:52:39.430598 augenrules[1465]: No rules
Sep 10 23:52:39.438868 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 10 23:52:39.441526 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:52:39.441726 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:52:39.456167 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 10 23:52:39.507465 systemd-resolved[1353]: Positive Trust Anchors:
Sep 10 23:52:39.507480 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:52:39.507512 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:52:39.511195 systemd-networkd[1433]: lo: Link UP
Sep 10 23:52:39.511206 systemd-networkd[1433]: lo: Gained carrier
Sep 10 23:52:39.512027 systemd-networkd[1433]: Enumeration completed
Sep 10 23:52:39.512125 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 23:52:39.515076 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:52:39.515087 systemd-networkd[1433]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 10 23:52:39.515425 systemd-resolved[1353]: Defaulting to hostname 'linux'. Sep 10 23:52:39.515580 systemd-networkd[1433]: eth0: Link UP Sep 10 23:52:39.515694 systemd-networkd[1433]: eth0: Gained carrier Sep 10 23:52:39.515710 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 10 23:52:39.516255 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 10 23:52:39.519389 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 23:52:39.522381 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 10 23:52:39.523551 systemd[1]: Reached target network.target - Network. Sep 10 23:52:39.524521 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 10 23:52:39.525819 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 10 23:52:39.527130 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 23:52:39.528562 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 23:52:39.530181 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 23:52:39.531561 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 23:52:39.532877 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Sep 10 23:52:39.532908 systemd[1]: Reached target paths.target - Path Units. Sep 10 23:52:39.534505 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 23:52:39.537436 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 23:52:39.538602 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 23:52:39.539823 systemd[1]: Reached target timers.target - Timer Units. Sep 10 23:52:39.541800 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 23:52:39.542069 systemd-networkd[1433]: eth0: DHCPv4 address 10.0.0.82/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 23:52:39.544558 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 23:52:39.546302 systemd-timesyncd[1434]: Network configuration changed, trying to establish connection. Sep 10 23:52:39.547243 systemd-timesyncd[1434]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 23:52:39.547391 systemd-timesyncd[1434]: Initial clock synchronization to Wed 2025-09-10 23:52:39.549205 UTC. Sep 10 23:52:39.548609 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 23:52:39.550540 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 23:52:39.551945 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 10 23:52:39.555095 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 23:52:39.556864 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 23:52:39.558886 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 23:52:39.564941 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 23:52:39.566223 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:52:39.567231 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Sep 10 23:52:39.567324 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:52:39.568329 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 23:52:39.570384 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 23:52:39.572640 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 23:52:39.576290 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 23:52:39.578705 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 23:52:39.579766 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 10 23:52:39.580818 jq[1499]: false Sep 10 23:52:39.580811 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 23:52:39.582802 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 23:52:39.586357 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 23:52:39.589355 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 23:52:39.591412 extend-filesystems[1500]: Found /dev/vda6 Sep 10 23:52:39.593480 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 23:52:39.595914 extend-filesystems[1500]: Found /dev/vda9 Sep 10 23:52:39.597000 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 23:52:39.597519 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 23:52:39.598147 systemd[1]: Starting update-engine.service - Update Engine... 
Sep 10 23:52:39.598254 extend-filesystems[1500]: Checking size of /dev/vda9 Sep 10 23:52:39.600371 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 23:52:39.607200 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 10 23:52:39.612161 jq[1520]: true Sep 10 23:52:39.609888 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 23:52:39.611666 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 23:52:39.611846 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 23:52:39.612203 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 23:52:39.612374 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 23:52:39.614709 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 10 23:52:39.614866 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 23:52:39.618621 extend-filesystems[1500]: Resized partition /dev/vda9 Sep 10 23:52:39.620745 extend-filesystems[1527]: resize2fs 1.47.2 (1-Jan-2025) Sep 10 23:52:39.632972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 10 23:52:39.633163 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 23:52:39.637212 jq[1528]: true Sep 10 23:52:39.638979 update_engine[1518]: I20250910 23:52:39.638849 1518 main.cc:92] Flatcar Update Engine starting Sep 10 23:52:39.648419 (ntainerd)[1529]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 23:52:39.649289 tar[1525]: linux-arm64/LICENSE Sep 10 23:52:39.649487 tar[1525]: linux-arm64/helm Sep 10 23:52:39.666396 dbus-daemon[1497]: [system] SELinux support is enabled Sep 10 23:52:39.666997 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 23:52:39.669161 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 23:52:39.672285 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 23:52:39.672312 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 23:52:39.686990 update_engine[1518]: I20250910 23:52:39.675635 1518 update_check_scheduler.cc:74] Next update check in 6m4s Sep 10 23:52:39.673660 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 23:52:39.673676 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 10 23:52:39.675584 systemd[1]: Started update-engine.service - Update Engine. Sep 10 23:52:39.684410 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 23:52:39.687080 systemd-logind[1514]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 23:52:39.690310 systemd-logind[1514]: New seat seat0. 
Sep 10 23:52:39.690986 extend-filesystems[1527]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 23:52:39.690986 extend-filesystems[1527]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 23:52:39.690986 extend-filesystems[1527]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 23:52:39.703683 extend-filesystems[1500]: Resized filesystem in /dev/vda9 Sep 10 23:52:39.691894 systemd[1]: Started systemd-logind.service - User Login Management. Sep 10 23:52:39.706585 bash[1559]: Updated "/home/core/.ssh/authorized_keys" Sep 10 23:52:39.700548 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 23:52:39.703352 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 23:52:39.738002 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 23:52:39.739691 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:52:39.741052 locksmithd[1561]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 23:52:39.745502 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 10 23:52:39.835151 containerd[1529]: time="2025-09-10T23:52:39Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 10 23:52:39.836363 containerd[1529]: time="2025-09-10T23:52:39.836327640Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 10 23:52:39.848131 containerd[1529]: time="2025-09-10T23:52:39.848087440Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.44µs"
Sep 10 23:52:39.848131 containerd[1529]: time="2025-09-10T23:52:39.848119040Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 10 23:52:39.848131 containerd[1529]: time="2025-09-10T23:52:39.848145880Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 10 23:52:39.848427 containerd[1529]: time="2025-09-10T23:52:39.848391640Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 10 23:52:39.848469 containerd[1529]: time="2025-09-10T23:52:39.848431640Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 10 23:52:39.848469 containerd[1529]: time="2025-09-10T23:52:39.848461160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 10 23:52:39.848539 containerd[1529]: time="2025-09-10T23:52:39.848520120Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 10 23:52:39.848539 containerd[1529]: time="2025-09-10T23:52:39.848535640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 10 23:52:39.848783 containerd[1529]: time="2025-09-10T23:52:39.848752440Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 10 23:52:39.848783 containerd[1529]: time="2025-09-10T23:52:39.848780160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 10 23:52:39.848833 containerd[1529]: time="2025-09-10T23:52:39.848791680Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 10 23:52:39.848833 containerd[1529]: time="2025-09-10T23:52:39.848799560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 10 23:52:39.848890 containerd[1529]: time="2025-09-10T23:52:39.848873440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 10 23:52:39.849099 containerd[1529]: time="2025-09-10T23:52:39.849060120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 10 23:52:39.849099 containerd[1529]: time="2025-09-10T23:52:39.849093160Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 10 23:52:39.849163 containerd[1529]: time="2025-09-10T23:52:39.849102760Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 10 23:52:39.851168 containerd[1529]: time="2025-09-10T23:52:39.849132600Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 10 23:52:39.851168 containerd[1529]: time="2025-09-10T23:52:39.849922600Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 10 23:52:39.851168 containerd[1529]: time="2025-09-10T23:52:39.850006560Z" level=info msg="metadata content store policy set" policy=shared
Sep 10 23:52:39.853506 containerd[1529]: time="2025-09-10T23:52:39.853444440Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 10 23:52:39.853506 containerd[1529]: time="2025-09-10T23:52:39.853489960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 10 23:52:39.853590 containerd[1529]: time="2025-09-10T23:52:39.853521720Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 10 23:52:39.853590 containerd[1529]: time="2025-09-10T23:52:39.853534520Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 10 23:52:39.853590 containerd[1529]: time="2025-09-10T23:52:39.853547680Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 10 23:52:39.853590 containerd[1529]: time="2025-09-10T23:52:39.853560280Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 10 23:52:39.853590 containerd[1529]: time="2025-09-10T23:52:39.853574640Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 10 23:52:39.853590 containerd[1529]: time="2025-09-10T23:52:39.853585960Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 10 23:52:39.853702 containerd[1529]: time="2025-09-10T23:52:39.853595400Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 10 23:52:39.853702 containerd[1529]: time="2025-09-10T23:52:39.853605000Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 10 23:52:39.853702 containerd[1529]: time="2025-09-10T23:52:39.853613360Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 10 23:52:39.853702 containerd[1529]: time="2025-09-10T23:52:39.853630200Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 10 23:52:39.853767 containerd[1529]: time="2025-09-10T23:52:39.853730880Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 10 23:52:39.853767 containerd[1529]: time="2025-09-10T23:52:39.853749520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 10 23:52:39.853767 containerd[1529]: time="2025-09-10T23:52:39.853761920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 10 23:52:39.853813 containerd[1529]: time="2025-09-10T23:52:39.853774600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 10 23:52:39.853813 containerd[1529]: time="2025-09-10T23:52:39.853784560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 10 23:52:39.853813 containerd[1529]: time="2025-09-10T23:52:39.853793640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 10 23:52:39.853813 containerd[1529]: time="2025-09-10T23:52:39.853803520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 10 23:52:39.853813 containerd[1529]: time="2025-09-10T23:52:39.853812680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 10 23:52:39.853935 containerd[1529]: time="2025-09-10T23:52:39.853827960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 10 23:52:39.853935 containerd[1529]: time="2025-09-10T23:52:39.853838320Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 10 23:52:39.853935 containerd[1529]: time="2025-09-10T23:52:39.853847440Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 10 23:52:39.854204 containerd[1529]: time="2025-09-10T23:52:39.854172120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 10 23:52:39.854240 containerd[1529]: time="2025-09-10T23:52:39.854205640Z" level=info msg="Start snapshots syncer"
Sep 10 23:52:39.854310 containerd[1529]: time="2025-09-10T23:52:39.854291080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 10 23:52:39.854845 containerd[1529]: time="2025-09-10T23:52:39.854753920Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 10 23:52:39.854949 containerd[1529]: time="2025-09-10T23:52:39.854857120Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 10 23:52:39.854994 containerd[1529]: time="2025-09-10T23:52:39.854973360Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 10 23:52:39.855202 containerd[1529]: time="2025-09-10T23:52:39.855169520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 10 23:52:39.855269 containerd[1529]: time="2025-09-10T23:52:39.855250600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 10 23:52:39.855306 containerd[1529]: time="2025-09-10T23:52:39.855280280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 10 23:52:39.855306 containerd[1529]: time="2025-09-10T23:52:39.855294520Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 10 23:52:39.855345 containerd[1529]: time="2025-09-10T23:52:39.855313920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 10 23:52:39.855384 containerd[1529]: time="2025-09-10T23:52:39.855366560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 10 23:52:39.855404 containerd[1529]: time="2025-09-10T23:52:39.855388640Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 10 23:52:39.855422 containerd[1529]: time="2025-09-10T23:52:39.855414280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 10 23:52:39.855439 containerd[1529]: time="2025-09-10T23:52:39.855425520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 10 23:52:39.855459 containerd[1529]: time="2025-09-10T23:52:39.855440360Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 10 23:52:39.855539 containerd[1529]: time="2025-09-10T23:52:39.855521560Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 10 23:52:39.855567 containerd[1529]: time="2025-09-10T23:52:39.855542280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 10 23:52:39.855567 containerd[1529]: time="2025-09-10T23:52:39.855551760Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 10 23:52:39.855567 containerd[1529]: time="2025-09-10T23:52:39.855560240Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 10 23:52:39.855617 containerd[1529]: time="2025-09-10T23:52:39.855568880Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 10 23:52:39.855617 containerd[1529]: time="2025-09-10T23:52:39.855580480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 10 23:52:39.855617 containerd[1529]: time="2025-09-10T23:52:39.855589800Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 10 23:52:39.855740 containerd[1529]: time="2025-09-10T23:52:39.855716600Z" level=info msg="runtime interface created"
Sep 10 23:52:39.855740 containerd[1529]: time="2025-09-10T23:52:39.855731560Z" level=info msg="created NRI interface"
Sep 10 23:52:39.855783 containerd[1529]: time="2025-09-10T23:52:39.855744440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 10 23:52:39.855783 containerd[1529]: time="2025-09-10T23:52:39.855756080Z" level=info msg="Connect containerd service"
Sep 10 23:52:39.855816 containerd[1529]: time="2025-09-10T23:52:39.855789920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 10 23:52:39.856895 containerd[1529]: time="2025-09-10T23:52:39.856870960Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 10 23:52:39.936041 containerd[1529]: time="2025-09-10T23:52:39.935573640Z" level=info msg="Start subscribing containerd event"
Sep 10 23:52:39.936041 containerd[1529]: time="2025-09-10T23:52:39.935656760Z" level=info msg="Start recovering state"
Sep 10 23:52:39.936041 containerd[1529]: time="2025-09-10T23:52:39.935739440Z" level=info msg="Start event monitor"
Sep 10 23:52:39.936041 containerd[1529]: time="2025-09-10T23:52:39.935754480Z" level=info msg="Start cni network conf syncer for default"
Sep 10 23:52:39.936041 containerd[1529]: time="2025-09-10T23:52:39.935762240Z" level=info msg="Start streaming server"
Sep 10 23:52:39.936041 containerd[1529]: time="2025-09-10T23:52:39.935775200Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 10 23:52:39.936041 containerd[1529]: time="2025-09-10T23:52:39.935782600Z" level=info msg="runtime interface starting up..."
Sep 10 23:52:39.936041 containerd[1529]: time="2025-09-10T23:52:39.935788000Z" level=info msg="starting plugins..."
Sep 10 23:52:39.936041 containerd[1529]: time="2025-09-10T23:52:39.935801920Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 10 23:52:39.936326 containerd[1529]: time="2025-09-10T23:52:39.936047600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 10 23:52:39.936326 containerd[1529]: time="2025-09-10T23:52:39.936205600Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 10 23:52:39.936453 containerd[1529]: time="2025-09-10T23:52:39.936433760Z" level=info msg="containerd successfully booted in 0.102187s"
Sep 10 23:52:39.936522 systemd[1]: Started containerd.service - containerd container runtime.
Sep 10 23:52:40.059416 sshd_keygen[1524]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 10 23:52:40.063008 tar[1525]: linux-arm64/README.md
Sep 10 23:52:40.078904 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 10 23:52:40.080555 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 10 23:52:40.083651 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 10 23:52:40.097636 systemd[1]: issuegen.service: Deactivated successfully.
Sep 10 23:52:40.097839 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 10 23:52:40.100534 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 10 23:52:40.119466 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 10 23:52:40.122192 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 10 23:52:40.124243 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 10 23:52:40.125548 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 23:52:41.289324 systemd-networkd[1433]: eth0: Gained IPv6LL Sep 10 23:52:41.291660 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 23:52:41.293422 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 23:52:41.296593 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 23:52:41.298737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:52:41.329536 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 23:52:41.344423 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 23:52:41.345637 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 23:52:41.347496 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 23:52:41.349298 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 10 23:52:41.859797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:52:41.861733 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 23:52:41.863372 (kubelet)[1635]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:52:41.863573 systemd[1]: Startup finished in 2.046s (kernel) + 5.350s (initrd) + 3.960s (userspace) = 11.357s. 
Sep 10 23:52:42.197188 kubelet[1635]: E0910 23:52:42.197052 1635 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:52:42.199430 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:52:42.199575 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:52:42.201221 systemd[1]: kubelet.service: Consumed 741ms CPU time, 256.8M memory peak. Sep 10 23:52:45.052910 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 23:52:45.054105 systemd[1]: Started sshd@0-10.0.0.82:22-10.0.0.1:54494.service - OpenSSH per-connection server daemon (10.0.0.1:54494). Sep 10 23:52:45.147527 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 54494 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:52:45.149416 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:52:45.155911 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 23:52:45.156924 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 10 23:52:45.163786 systemd-logind[1514]: New session 1 of user core. Sep 10 23:52:45.188548 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 23:52:45.191388 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 23:52:45.214578 (systemd)[1652]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 23:52:45.217632 systemd-logind[1514]: New session c1 of user core. Sep 10 23:52:45.335550 systemd[1652]: Queued start job for default target default.target. 
Sep 10 23:52:45.357197 systemd[1652]: Created slice app.slice - User Application Slice. Sep 10 23:52:45.357227 systemd[1652]: Reached target paths.target - Paths. Sep 10 23:52:45.357266 systemd[1652]: Reached target timers.target - Timers. Sep 10 23:52:45.358609 systemd[1652]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 23:52:45.367842 systemd[1652]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 23:52:45.367918 systemd[1652]: Reached target sockets.target - Sockets. Sep 10 23:52:45.367957 systemd[1652]: Reached target basic.target - Basic System. Sep 10 23:52:45.368001 systemd[1652]: Reached target default.target - Main User Target. Sep 10 23:52:45.368031 systemd[1652]: Startup finished in 144ms. Sep 10 23:52:45.368228 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 23:52:45.369731 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 23:52:45.427302 systemd[1]: Started sshd@1-10.0.0.82:22-10.0.0.1:54508.service - OpenSSH per-connection server daemon (10.0.0.1:54508). Sep 10 23:52:45.490932 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 54508 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:52:45.492248 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:52:45.496367 systemd-logind[1514]: New session 2 of user core. Sep 10 23:52:45.514329 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 23:52:45.566615 sshd[1665]: Connection closed by 10.0.0.1 port 54508 Sep 10 23:52:45.567044 sshd-session[1663]: pam_unix(sshd:session): session closed for user core Sep 10 23:52:45.576992 systemd[1]: sshd@1-10.0.0.82:22-10.0.0.1:54508.service: Deactivated successfully. Sep 10 23:52:45.578380 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 23:52:45.579120 systemd-logind[1514]: Session 2 logged out. Waiting for processes to exit. 
Sep 10 23:52:45.581169 systemd[1]: Started sshd@2-10.0.0.82:22-10.0.0.1:54522.service - OpenSSH per-connection server daemon (10.0.0.1:54522).
Sep 10 23:52:45.582028 systemd-logind[1514]: Removed session 2.
Sep 10 23:52:45.633519 sshd[1671]: Accepted publickey for core from 10.0.0.1 port 54522 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:52:45.634787 sshd-session[1671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:52:45.639341 systemd-logind[1514]: New session 3 of user core.
Sep 10 23:52:45.650321 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 10 23:52:45.700639 sshd[1673]: Connection closed by 10.0.0.1 port 54522
Sep 10 23:52:45.701065 sshd-session[1671]: pam_unix(sshd:session): session closed for user core
Sep 10 23:52:45.710036 systemd[1]: sshd@2-10.0.0.82:22-10.0.0.1:54522.service: Deactivated successfully.
Sep 10 23:52:45.712520 systemd[1]: session-3.scope: Deactivated successfully.
Sep 10 23:52:45.713159 systemd-logind[1514]: Session 3 logged out. Waiting for processes to exit.
Sep 10 23:52:45.715533 systemd[1]: Started sshd@3-10.0.0.82:22-10.0.0.1:54536.service - OpenSSH per-connection server daemon (10.0.0.1:54536).
Sep 10 23:52:45.716464 systemd-logind[1514]: Removed session 3.
Sep 10 23:52:45.769953 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 54536 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:52:45.771127 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:52:45.775578 systemd-logind[1514]: New session 4 of user core.
Sep 10 23:52:45.787297 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 10 23:52:45.839668 sshd[1681]: Connection closed by 10.0.0.1 port 54536
Sep 10 23:52:45.840064 sshd-session[1679]: pam_unix(sshd:session): session closed for user core
Sep 10 23:52:45.851976 systemd[1]: sshd@3-10.0.0.82:22-10.0.0.1:54536.service: Deactivated successfully.
Sep 10 23:52:45.854293 systemd[1]: session-4.scope: Deactivated successfully.
Sep 10 23:52:45.854983 systemd-logind[1514]: Session 4 logged out. Waiting for processes to exit.
Sep 10 23:52:45.857111 systemd[1]: Started sshd@4-10.0.0.82:22-10.0.0.1:54538.service - OpenSSH per-connection server daemon (10.0.0.1:54538).
Sep 10 23:52:45.857721 systemd-logind[1514]: Removed session 4.
Sep 10 23:52:45.910568 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 54538 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:52:45.911899 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:52:45.916204 systemd-logind[1514]: New session 5 of user core.
Sep 10 23:52:45.928410 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 10 23:52:45.984733 sudo[1690]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 10 23:52:45.985024 sudo[1690]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:52:45.995754 sudo[1690]: pam_unix(sudo:session): session closed for user root
Sep 10 23:52:45.997150 sshd[1689]: Connection closed by 10.0.0.1 port 54538
Sep 10 23:52:45.997647 sshd-session[1687]: pam_unix(sshd:session): session closed for user core
Sep 10 23:52:46.007069 systemd[1]: sshd@4-10.0.0.82:22-10.0.0.1:54538.service: Deactivated successfully.
Sep 10 23:52:46.010351 systemd[1]: session-5.scope: Deactivated successfully.
Sep 10 23:52:46.011002 systemd-logind[1514]: Session 5 logged out. Waiting for processes to exit.
Sep 10 23:52:46.013302 systemd[1]: Started sshd@5-10.0.0.82:22-10.0.0.1:54550.service - OpenSSH per-connection server daemon (10.0.0.1:54550).
Sep 10 23:52:46.014205 systemd-logind[1514]: Removed session 5.
Sep 10 23:52:46.068314 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 54550 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:52:46.069481 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:52:46.074045 systemd-logind[1514]: New session 6 of user core.
Sep 10 23:52:46.091289 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 10 23:52:46.143507 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 10 23:52:46.143770 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:52:46.148846 sudo[1700]: pam_unix(sudo:session): session closed for user root
Sep 10 23:52:46.153371 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 10 23:52:46.153625 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:52:46.161571 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:52:46.202020 augenrules[1722]: No rules
Sep 10 23:52:46.203116 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:52:46.204266 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:52:46.205099 sudo[1699]: pam_unix(sudo:session): session closed for user root
Sep 10 23:52:46.206281 sshd[1698]: Connection closed by 10.0.0.1 port 54550
Sep 10 23:52:46.206610 sshd-session[1696]: pam_unix(sshd:session): session closed for user core
Sep 10 23:52:46.218907 systemd[1]: sshd@5-10.0.0.82:22-10.0.0.1:54550.service: Deactivated successfully.
Sep 10 23:52:46.222372 systemd[1]: session-6.scope: Deactivated successfully.
Sep 10 23:52:46.223158 systemd-logind[1514]: Session 6 logged out. Waiting for processes to exit.
Sep 10 23:52:46.226545 systemd[1]: Started sshd@6-10.0.0.82:22-10.0.0.1:54564.service - OpenSSH per-connection server daemon (10.0.0.1:54564).
Sep 10 23:52:46.227650 systemd-logind[1514]: Removed session 6.
Sep 10 23:52:46.282440 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 54564 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:52:46.283571 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:52:46.287929 systemd-logind[1514]: New session 7 of user core.
Sep 10 23:52:46.296296 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 10 23:52:46.351015 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 10 23:52:46.351475 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 10 23:52:46.647346 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 10 23:52:46.659445 (dockerd)[1755]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 10 23:52:46.879913 dockerd[1755]: time="2025-09-10T23:52:46.879712111Z" level=info msg="Starting up"
Sep 10 23:52:46.881185 dockerd[1755]: time="2025-09-10T23:52:46.881162360Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 10 23:52:46.994315 dockerd[1755]: time="2025-09-10T23:52:46.994205166Z" level=info msg="Loading containers: start."
Sep 10 23:52:47.002182 kernel: Initializing XFRM netlink socket
Sep 10 23:52:47.187948 systemd-networkd[1433]: docker0: Link UP
Sep 10 23:52:47.191423 dockerd[1755]: time="2025-09-10T23:52:47.191380887Z" level=info msg="Loading containers: done."
Sep 10 23:52:47.204754 dockerd[1755]: time="2025-09-10T23:52:47.204698792Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 10 23:52:47.204906 dockerd[1755]: time="2025-09-10T23:52:47.204791959Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 10 23:52:47.204931 dockerd[1755]: time="2025-09-10T23:52:47.204902889Z" level=info msg="Initializing buildkit"
Sep 10 23:52:47.229392 dockerd[1755]: time="2025-09-10T23:52:47.229331156Z" level=info msg="Completed buildkit initialization"
Sep 10 23:52:47.236365 dockerd[1755]: time="2025-09-10T23:52:47.236314855Z" level=info msg="Daemon has completed initialization"
Sep 10 23:52:47.236507 dockerd[1755]: time="2025-09-10T23:52:47.236413943Z" level=info msg="API listen on /run/docker.sock"
Sep 10 23:52:47.236634 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 10 23:52:47.889378 containerd[1529]: time="2025-09-10T23:52:47.889341526Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 10 23:52:47.973521 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck993835649-merged.mount: Deactivated successfully.
Sep 10 23:52:48.574279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount376227168.mount: Deactivated successfully.
Sep 10 23:52:49.662514 containerd[1529]: time="2025-09-10T23:52:49.662440392Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:49.715337 containerd[1529]: time="2025-09-10T23:52:49.715258885Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687"
Sep 10 23:52:49.729591 containerd[1529]: time="2025-09-10T23:52:49.729543687Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:49.744455 containerd[1529]: time="2025-09-10T23:52:49.744397250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:49.745935 containerd[1529]: time="2025-09-10T23:52:49.745897359Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.856514631s"
Sep 10 23:52:49.745975 containerd[1529]: time="2025-09-10T23:52:49.745936162Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\""
Sep 10 23:52:49.746716 containerd[1529]: time="2025-09-10T23:52:49.746682017Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 10 23:52:50.756821 containerd[1529]: time="2025-09-10T23:52:50.756771616Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:50.757919 containerd[1529]: time="2025-09-10T23:52:50.757886812Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202"
Sep 10 23:52:50.758799 containerd[1529]: time="2025-09-10T23:52:50.758773752Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:50.762562 containerd[1529]: time="2025-09-10T23:52:50.762124622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:50.762904 containerd[1529]: time="2025-09-10T23:52:50.762871633Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.016158694s"
Sep 10 23:52:50.762952 containerd[1529]: time="2025-09-10T23:52:50.762907635Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\""
Sep 10 23:52:50.763582 containerd[1529]: time="2025-09-10T23:52:50.763506676Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 10 23:52:51.873531 containerd[1529]: time="2025-09-10T23:52:51.873482041Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:51.874440 containerd[1529]: time="2025-09-10T23:52:51.874324295Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326"
Sep 10 23:52:51.875175 containerd[1529]: time="2025-09-10T23:52:51.875110945Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:51.877595 containerd[1529]: time="2025-09-10T23:52:51.877559902Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:51.879486 containerd[1529]: time="2025-09-10T23:52:51.879302894Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.115766616s"
Sep 10 23:52:51.879486 containerd[1529]: time="2025-09-10T23:52:51.879344537Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\""
Sep 10 23:52:51.879929 containerd[1529]: time="2025-09-10T23:52:51.879908173Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 10 23:52:52.340275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:52:52.341848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:52:52.486032 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:52:52.490025 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 10 23:52:52.549100 kubelet[2042]: E0910 23:52:52.548731 2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 10 23:52:52.559848 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 10 23:52:52.559971 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 10 23:52:52.560251 systemd[1]: kubelet.service: Consumed 152ms CPU time, 107.7M memory peak.
Sep 10 23:52:52.913177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2553493908.mount: Deactivated successfully.
Sep 10 23:52:53.127239 containerd[1529]: time="2025-09-10T23:52:53.127195660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:53.128031 containerd[1529]: time="2025-09-10T23:52:53.127999025Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819"
Sep 10 23:52:53.129168 containerd[1529]: time="2025-09-10T23:52:53.129065805Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:53.131013 containerd[1529]: time="2025-09-10T23:52:53.130970233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:53.131912 containerd[1529]: time="2025-09-10T23:52:53.131861283Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.251925108s"
Sep 10 23:52:53.131912 containerd[1529]: time="2025-09-10T23:52:53.131899325Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\""
Sep 10 23:52:53.132491 containerd[1529]: time="2025-09-10T23:52:53.132469437Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 10 23:52:53.611341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1397854631.mount: Deactivated successfully.
Sep 10 23:52:54.376062 containerd[1529]: time="2025-09-10T23:52:54.376003111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:54.377812 containerd[1529]: time="2025-09-10T23:52:54.377719201Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 10 23:52:54.379094 containerd[1529]: time="2025-09-10T23:52:54.378656651Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:54.381194 containerd[1529]: time="2025-09-10T23:52:54.381162983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:54.382563 containerd[1529]: time="2025-09-10T23:52:54.382528855Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.250029617s"
Sep 10 23:52:54.382667 containerd[1529]: time="2025-09-10T23:52:54.382651822Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 10 23:52:54.383481 containerd[1529]: time="2025-09-10T23:52:54.383449944Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 10 23:52:54.803056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2738039430.mount: Deactivated successfully.
Sep 10 23:52:54.808154 containerd[1529]: time="2025-09-10T23:52:54.808090534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:52:54.808986 containerd[1529]: time="2025-09-10T23:52:54.808953259Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 10 23:52:54.809964 containerd[1529]: time="2025-09-10T23:52:54.809929231Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:52:54.811925 containerd[1529]: time="2025-09-10T23:52:54.811881494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 10 23:52:54.812838 containerd[1529]: time="2025-09-10T23:52:54.812772301Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 429.286716ms"
Sep 10 23:52:54.812838 containerd[1529]: time="2025-09-10T23:52:54.812806703Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 10 23:52:54.813423 containerd[1529]: time="2025-09-10T23:52:54.813387814Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 10 23:52:55.461712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4089296639.mount: Deactivated successfully.
Sep 10 23:52:56.868890 containerd[1529]: time="2025-09-10T23:52:56.868825989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:56.872864 containerd[1529]: time="2025-09-10T23:52:56.872828495Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Sep 10 23:52:56.874182 containerd[1529]: time="2025-09-10T23:52:56.873972268Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:56.879382 containerd[1529]: time="2025-09-10T23:52:56.879313676Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:52:56.880498 containerd[1529]: time="2025-09-10T23:52:56.880361405Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.066934629s"
Sep 10 23:52:56.880498 containerd[1529]: time="2025-09-10T23:52:56.880397206Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 10 23:53:01.951695 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:53:01.951846 systemd[1]: kubelet.service: Consumed 152ms CPU time, 107.7M memory peak.
Sep 10 23:53:01.953650 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:53:01.971424 systemd[1]: Reload requested from client PID 2197 ('systemctl') (unit session-7.scope)...
Sep 10 23:53:01.971439 systemd[1]: Reloading...
Sep 10 23:53:02.052164 zram_generator::config[2244]: No configuration found.
Sep 10 23:53:02.148078 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 10 23:53:02.231759 systemd[1]: Reloading finished in 260 ms.
Sep 10 23:53:02.275011 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:53:02.277249 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:53:02.278813 systemd[1]: kubelet.service: Deactivated successfully.
Sep 10 23:53:02.278998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:53:02.279038 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.1M memory peak.
Sep 10 23:53:02.281421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 10 23:53:02.391502 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 10 23:53:02.404407 (kubelet)[2288]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 10 23:53:02.435613 kubelet[2288]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:53:02.435613 kubelet[2288]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 10 23:53:02.435613 kubelet[2288]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 10 23:53:02.435898 kubelet[2288]: I0910 23:53:02.435671 2288 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 10 23:53:03.105170 kubelet[2288]: I0910 23:53:03.104860 2288 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 10 23:53:03.105170 kubelet[2288]: I0910 23:53:03.104891 2288 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 10 23:53:03.105320 kubelet[2288]: I0910 23:53:03.105135 2288 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 10 23:53:03.127292 kubelet[2288]: E0910 23:53:03.127255 2288 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.82:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:53:03.128637 kubelet[2288]: I0910 23:53:03.128614 2288 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 10 23:53:03.134127 kubelet[2288]: I0910 23:53:03.134110 2288 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 10 23:53:03.137036 kubelet[2288]: I0910 23:53:03.136999 2288 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 10 23:53:03.137821 kubelet[2288]: I0910 23:53:03.137751 2288 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 10 23:53:03.138279 kubelet[2288]: I0910 23:53:03.137892 2288 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 10 23:53:03.138279 kubelet[2288]: I0910 23:53:03.138198 2288 topology_manager.go:138] "Creating topology manager with none policy"
Sep 10 23:53:03.138279 kubelet[2288]: I0910 23:53:03.138211 2288 container_manager_linux.go:304] "Creating device plugin manager"
Sep 10 23:53:03.138429 kubelet[2288]: I0910 23:53:03.138390 2288 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:53:03.140756 kubelet[2288]: I0910 23:53:03.140738 2288 kubelet.go:446] "Attempting to sync node with API server"
Sep 10 23:53:03.140806 kubelet[2288]: I0910 23:53:03.140775 2288 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 10 23:53:03.140833 kubelet[2288]: I0910 23:53:03.140808 2288 kubelet.go:352] "Adding apiserver pod source"
Sep 10 23:53:03.140833 kubelet[2288]: I0910 23:53:03.140826 2288 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 10 23:53:03.145192 kubelet[2288]: I0910 23:53:03.145168 2288 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 10 23:53:03.146168 kubelet[2288]: W0910 23:53:03.145950 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Sep 10 23:53:03.146168 kubelet[2288]: E0910 23:53:03.146012 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.82:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:53:03.146426 kubelet[2288]: W0910 23:53:03.146386 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Sep 10 23:53:03.146472 kubelet[2288]: E0910 23:53:03.146430 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.82:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:53:03.147006 kubelet[2288]: I0910 23:53:03.146975 2288 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 10 23:53:03.147143 kubelet[2288]: W0910 23:53:03.147123 2288 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 10 23:53:03.148373 kubelet[2288]: I0910 23:53:03.147971 2288 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 10 23:53:03.148373 kubelet[2288]: I0910 23:53:03.148007 2288 server.go:1287] "Started kubelet"
Sep 10 23:53:03.151545 kubelet[2288]: I0910 23:53:03.151516 2288 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 10 23:53:03.151773 kubelet[2288]: I0910 23:53:03.151499 2288 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 10 23:53:03.151956 kubelet[2288]: I0910 23:53:03.151932 2288 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 10 23:53:03.152245 kubelet[2288]: I0910 23:53:03.152220 2288 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 10 23:53:03.152677 kubelet[2288]: I0910 23:53:03.152659 2288 server.go:479] "Adding debug handlers to kubelet server"
Sep 10 23:53:03.152884 kubelet[2288]: I0910 23:53:03.152854 2288 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 10 23:53:03.152927 kubelet[2288]: E0910 23:53:03.152906 2288 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 10 23:53:03.153279 kubelet[2288]: E0910 23:53:03.153253 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 10 23:53:03.153329 kubelet[2288]: I0910 23:53:03.153288 2288 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 10 23:53:03.153466 kubelet[2288]: I0910 23:53:03.153440 2288 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 10 23:53:03.153510 kubelet[2288]: I0910 23:53:03.153489 2288 reconciler.go:26] "Reconciler: start to sync state"
Sep 10 23:53:03.153794 kubelet[2288]: W0910 23:53:03.153743 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused
Sep 10 23:53:03.153849 kubelet[2288]: E0910 23:53:03.153796 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.82:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError"
Sep 10 23:53:03.153891 kubelet[2288]: E0910 23:53:03.152525 2288 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.82:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.82:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186410f14d190e75 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 23:53:03.147986549 +0000 UTC m=+0.740727500,LastTimestamp:2025-09-10 23:53:03.147986549 +0000 UTC m=+0.740727500,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 10 23:53:03.154036 kubelet[2288]: E0910 23:53:03.154011 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="200ms"
Sep 10 23:53:03.155096 kubelet[2288]: I0910 23:53:03.155073 2288 factory.go:221] Registration of the containerd container factory successfully
Sep 10 23:53:03.155096 kubelet[2288]: I0910 23:53:03.155093 2288 factory.go:221] Registration of the systemd container factory successfully
Sep 10 23:53:03.155207 kubelet[2288]: I0910 23:53:03.155174 2288 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 10 23:53:03.169027 kubelet[2288]: I0910 23:53:03.168999 2288 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 10 23:53:03.169027 kubelet[2288]: I0910 23:53:03.169023 2288 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 10 23:53:03.169148 kubelet[2288]: I0910 23:53:03.169040 2288 state_mem.go:36] "Initialized new in-memory state store"
Sep 10 23:53:03.169921 kubelet[2288]: I0910 23:53:03.169891 2288 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 10 23:53:03.170964 kubelet[2288]: I0910 23:53:03.170929 2288 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Sep 10 23:53:03.170964 kubelet[2288]: I0910 23:53:03.170951 2288 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 10 23:53:03.170964 kubelet[2288]: I0910 23:53:03.170965 2288 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 10 23:53:03.170964 kubelet[2288]: I0910 23:53:03.170971 2288 kubelet.go:2382] "Starting kubelet main sync loop" Sep 10 23:53:03.171099 kubelet[2288]: E0910 23:53:03.171007 2288 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:53:03.249521 kubelet[2288]: W0910 23:53:03.249411 2288 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.82:6443: connect: connection refused Sep 10 23:53:03.249521 kubelet[2288]: E0910 23:53:03.249486 2288 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.82:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.82:6443: connect: connection refused" logger="UnhandledError" Sep 10 23:53:03.250131 kubelet[2288]: I0910 23:53:03.250071 2288 policy_none.go:49] "None policy: Start" Sep 10 23:53:03.250131 kubelet[2288]: I0910 23:53:03.250102 2288 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 23:53:03.250131 kubelet[2288]: I0910 23:53:03.250116 2288 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:53:03.253403 kubelet[2288]: E0910 23:53:03.253377 2288 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:53:03.255755 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. Sep 10 23:53:03.268201 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 23:53:03.271169 kubelet[2288]: E0910 23:53:03.271127 2288 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 23:53:03.271336 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 10 23:53:03.286921 kubelet[2288]: I0910 23:53:03.286895 2288 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 23:53:03.287213 kubelet[2288]: I0910 23:53:03.287194 2288 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:53:03.287303 kubelet[2288]: I0910 23:53:03.287270 2288 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:53:03.287506 kubelet[2288]: I0910 23:53:03.287474 2288 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:53:03.288226 kubelet[2288]: E0910 23:53:03.288208 2288 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 23:53:03.288290 kubelet[2288]: E0910 23:53:03.288243 2288 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 23:53:03.354970 kubelet[2288]: E0910 23:53:03.354942 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="400ms" Sep 10 23:53:03.389216 kubelet[2288]: I0910 23:53:03.388981 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:53:03.389667 kubelet[2288]: E0910 23:53:03.389638 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Sep 10 23:53:03.479361 systemd[1]: Created slice kubepods-burstable-pod62bf92eb55477fec747c29fa808c9c9b.slice - libcontainer container kubepods-burstable-pod62bf92eb55477fec747c29fa808c9c9b.slice. Sep 10 23:53:03.503442 kubelet[2288]: E0910 23:53:03.503407 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:53:03.506125 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 10 23:53:03.523145 kubelet[2288]: E0910 23:53:03.523103 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:53:03.525376 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. 
Sep 10 23:53:03.526951 kubelet[2288]: E0910 23:53:03.526914 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:53:03.591032 kubelet[2288]: I0910 23:53:03.591011 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:53:03.591451 kubelet[2288]: E0910 23:53:03.591406 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Sep 10 23:53:03.654248 kubelet[2288]: I0910 23:53:03.654152 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62bf92eb55477fec747c29fa808c9c9b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"62bf92eb55477fec747c29fa808c9c9b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:53:03.654248 kubelet[2288]: I0910 23:53:03.654185 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:53:03.654248 kubelet[2288]: I0910 23:53:03.654205 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62bf92eb55477fec747c29fa808c9c9b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"62bf92eb55477fec747c29fa808c9c9b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:53:03.654248 kubelet[2288]: I0910 23:53:03.654222 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/62bf92eb55477fec747c29fa808c9c9b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"62bf92eb55477fec747c29fa808c9c9b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:53:03.654248 kubelet[2288]: I0910 23:53:03.654239 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:53:03.654405 kubelet[2288]: I0910 23:53:03.654257 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:53:03.654405 kubelet[2288]: I0910 23:53:03.654275 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 10 23:53:03.654405 kubelet[2288]: I0910 23:53:03.654290 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:53:03.654405 kubelet[2288]: I0910 23:53:03.654306 2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:53:03.755774 kubelet[2288]: E0910 23:53:03.755724 2288 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.82:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.82:6443: connect: connection refused" interval="800ms" Sep 10 23:53:03.803982 kubelet[2288]: E0910 23:53:03.803944 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:03.804739 containerd[1529]: time="2025-09-10T23:53:03.804458227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:62bf92eb55477fec747c29fa808c9c9b,Namespace:kube-system,Attempt:0,}" Sep 10 23:53:03.820787 containerd[1529]: time="2025-09-10T23:53:03.820754629Z" level=info msg="connecting to shim a330089242d8e5a1029659779c444eca751fc477d96dcce790d1d1203b761c37" address="unix:///run/containerd/s/64981594be38c07fac2c6b94a562b7dcd04e75ab7dcf063832612a98c4d2ab0b" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:53:03.824203 kubelet[2288]: E0910 23:53:03.824179 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:03.824577 containerd[1529]: time="2025-09-10T23:53:03.824550261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 10 23:53:03.827461 kubelet[2288]: E0910 23:53:03.827178 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:03.827526 containerd[1529]: time="2025-09-10T23:53:03.827486708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 10 23:53:03.844438 systemd[1]: Started cri-containerd-a330089242d8e5a1029659779c444eca751fc477d96dcce790d1d1203b761c37.scope - libcontainer container a330089242d8e5a1029659779c444eca751fc477d96dcce790d1d1203b761c37. Sep 10 23:53:03.850946 containerd[1529]: time="2025-09-10T23:53:03.850903000Z" level=info msg="connecting to shim 46dea5fcb345421f773855081783dfd308274c22a4be5e8bc10a3e618b335217" address="unix:///run/containerd/s/c16275b5650f92577b0a389b6f08daaf991e75243d4bb50d99323318b7fe75fd" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:53:03.851770 containerd[1529]: time="2025-09-10T23:53:03.851693583Z" level=info msg="connecting to shim ba370ca0852b23ef27a5881b5eb33c262d24cff2c9514010d606a78521667b50" address="unix:///run/containerd/s/452cf08eeff75922b889db1b2ac9ec8db38c501a27777140031b812f1f4aacdc" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:53:03.885453 systemd[1]: Started cri-containerd-46dea5fcb345421f773855081783dfd308274c22a4be5e8bc10a3e618b335217.scope - libcontainer container 46dea5fcb345421f773855081783dfd308274c22a4be5e8bc10a3e618b335217. Sep 10 23:53:03.887187 systemd[1]: Started cri-containerd-ba370ca0852b23ef27a5881b5eb33c262d24cff2c9514010d606a78521667b50.scope - libcontainer container ba370ca0852b23ef27a5881b5eb33c262d24cff2c9514010d606a78521667b50. 
Sep 10 23:53:03.889474 containerd[1529]: time="2025-09-10T23:53:03.889423498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:62bf92eb55477fec747c29fa808c9c9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a330089242d8e5a1029659779c444eca751fc477d96dcce790d1d1203b761c37\"" Sep 10 23:53:03.891896 kubelet[2288]: E0910 23:53:03.891869 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:03.895212 containerd[1529]: time="2025-09-10T23:53:03.894348323Z" level=info msg="CreateContainer within sandbox \"a330089242d8e5a1029659779c444eca751fc477d96dcce790d1d1203b761c37\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 23:53:03.902070 containerd[1529]: time="2025-09-10T23:53:03.902042071Z" level=info msg="Container f1ab417c79184b70f1e0e90c1dc1ffb0a50a358bcd3aaef0df00d952a0caa763: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:53:03.908371 containerd[1529]: time="2025-09-10T23:53:03.908285095Z" level=info msg="CreateContainer within sandbox \"a330089242d8e5a1029659779c444eca751fc477d96dcce790d1d1203b761c37\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f1ab417c79184b70f1e0e90c1dc1ffb0a50a358bcd3aaef0df00d952a0caa763\"" Sep 10 23:53:03.908999 containerd[1529]: time="2025-09-10T23:53:03.908834031Z" level=info msg="StartContainer for \"f1ab417c79184b70f1e0e90c1dc1ffb0a50a358bcd3aaef0df00d952a0caa763\"" Sep 10 23:53:03.910068 containerd[1529]: time="2025-09-10T23:53:03.909919144Z" level=info msg="connecting to shim f1ab417c79184b70f1e0e90c1dc1ffb0a50a358bcd3aaef0df00d952a0caa763" address="unix:///run/containerd/s/64981594be38c07fac2c6b94a562b7dcd04e75ab7dcf063832612a98c4d2ab0b" protocol=ttrpc version=3 Sep 10 23:53:03.927367 containerd[1529]: time="2025-09-10T23:53:03.927025089Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"46dea5fcb345421f773855081783dfd308274c22a4be5e8bc10a3e618b335217\"" Sep 10 23:53:03.928560 kubelet[2288]: E0910 23:53:03.928535 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:03.929097 containerd[1529]: time="2025-09-10T23:53:03.929067029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba370ca0852b23ef27a5881b5eb33c262d24cff2c9514010d606a78521667b50\"" Sep 10 23:53:03.929550 kubelet[2288]: E0910 23:53:03.929494 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:03.930960 containerd[1529]: time="2025-09-10T23:53:03.930935845Z" level=info msg="CreateContainer within sandbox \"46dea5fcb345421f773855081783dfd308274c22a4be5e8bc10a3e618b335217\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 23:53:03.931632 containerd[1529]: time="2025-09-10T23:53:03.931579944Z" level=info msg="CreateContainer within sandbox \"ba370ca0852b23ef27a5881b5eb33c262d24cff2c9514010d606a78521667b50\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 23:53:03.933711 systemd[1]: Started cri-containerd-f1ab417c79184b70f1e0e90c1dc1ffb0a50a358bcd3aaef0df00d952a0caa763.scope - libcontainer container f1ab417c79184b70f1e0e90c1dc1ffb0a50a358bcd3aaef0df00d952a0caa763. 
Sep 10 23:53:03.940273 containerd[1529]: time="2025-09-10T23:53:03.939357053Z" level=info msg="Container 661857b29dd045258aeb0bcf1a557d1e9094fb5a382c39964ab901a7e4d5ab40: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:53:03.942782 containerd[1529]: time="2025-09-10T23:53:03.942742153Z" level=info msg="Container 798911f43efb8c69e5ad68c29a0a423970c482f018ede971adb7bf4367ba03b4: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:53:03.950166 containerd[1529]: time="2025-09-10T23:53:03.948741691Z" level=info msg="CreateContainer within sandbox \"46dea5fcb345421f773855081783dfd308274c22a4be5e8bc10a3e618b335217\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"661857b29dd045258aeb0bcf1a557d1e9094fb5a382c39964ab901a7e4d5ab40\"" Sep 10 23:53:03.950632 containerd[1529]: time="2025-09-10T23:53:03.950607706Z" level=info msg="StartContainer for \"661857b29dd045258aeb0bcf1a557d1e9094fb5a382c39964ab901a7e4d5ab40\"" Sep 10 23:53:03.952418 containerd[1529]: time="2025-09-10T23:53:03.952386598Z" level=info msg="CreateContainer within sandbox \"ba370ca0852b23ef27a5881b5eb33c262d24cff2c9514010d606a78521667b50\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"798911f43efb8c69e5ad68c29a0a423970c482f018ede971adb7bf4367ba03b4\"" Sep 10 23:53:03.952723 containerd[1529]: time="2025-09-10T23:53:03.952698928Z" level=info msg="StartContainer for \"798911f43efb8c69e5ad68c29a0a423970c482f018ede971adb7bf4367ba03b4\"" Sep 10 23:53:03.952800 containerd[1529]: time="2025-09-10T23:53:03.952772490Z" level=info msg="connecting to shim 661857b29dd045258aeb0bcf1a557d1e9094fb5a382c39964ab901a7e4d5ab40" address="unix:///run/containerd/s/c16275b5650f92577b0a389b6f08daaf991e75243d4bb50d99323318b7fe75fd" protocol=ttrpc version=3 Sep 10 23:53:03.953826 containerd[1529]: time="2025-09-10T23:53:03.953783760Z" level=info msg="connecting to shim 798911f43efb8c69e5ad68c29a0a423970c482f018ede971adb7bf4367ba03b4" 
address="unix:///run/containerd/s/452cf08eeff75922b889db1b2ac9ec8db38c501a27777140031b812f1f4aacdc" protocol=ttrpc version=3 Sep 10 23:53:03.978303 systemd[1]: Started cri-containerd-798911f43efb8c69e5ad68c29a0a423970c482f018ede971adb7bf4367ba03b4.scope - libcontainer container 798911f43efb8c69e5ad68c29a0a423970c482f018ede971adb7bf4367ba03b4. Sep 10 23:53:03.981687 systemd[1]: Started cri-containerd-661857b29dd045258aeb0bcf1a557d1e9094fb5a382c39964ab901a7e4d5ab40.scope - libcontainer container 661857b29dd045258aeb0bcf1a557d1e9094fb5a382c39964ab901a7e4d5ab40. Sep 10 23:53:03.987421 containerd[1529]: time="2025-09-10T23:53:03.987390233Z" level=info msg="StartContainer for \"f1ab417c79184b70f1e0e90c1dc1ffb0a50a358bcd3aaef0df00d952a0caa763\" returns successfully" Sep 10 23:53:03.993609 kubelet[2288]: I0910 23:53:03.993577 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:53:03.993977 kubelet[2288]: E0910 23:53:03.993948 2288 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.82:6443/api/v1/nodes\": dial tcp 10.0.0.82:6443: connect: connection refused" node="localhost" Sep 10 23:53:04.031165 containerd[1529]: time="2025-09-10T23:53:04.029698590Z" level=info msg="StartContainer for \"798911f43efb8c69e5ad68c29a0a423970c482f018ede971adb7bf4367ba03b4\" returns successfully" Sep 10 23:53:04.033456 containerd[1529]: time="2025-09-10T23:53:04.033391813Z" level=info msg="StartContainer for \"661857b29dd045258aeb0bcf1a557d1e9094fb5a382c39964ab901a7e4d5ab40\" returns successfully" Sep 10 23:53:04.179277 kubelet[2288]: E0910 23:53:04.179116 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:53:04.179866 kubelet[2288]: E0910 23:53:04.179263 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:04.181781 kubelet[2288]: E0910 23:53:04.181289 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:53:04.181781 kubelet[2288]: E0910 23:53:04.181406 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:04.183709 kubelet[2288]: E0910 23:53:04.183687 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:53:04.183794 kubelet[2288]: E0910 23:53:04.183776 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:04.795393 kubelet[2288]: I0910 23:53:04.795357 2288 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:53:05.185635 kubelet[2288]: E0910 23:53:05.185589 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:53:05.185744 kubelet[2288]: E0910 23:53:05.185686 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:53:05.185744 kubelet[2288]: E0910 23:53:05.185704 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:05.185868 kubelet[2288]: E0910 23:53:05.185813 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:05.186193 
kubelet[2288]: E0910 23:53:05.186173 2288 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:53:05.186296 kubelet[2288]: E0910 23:53:05.186275 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:05.492688 kubelet[2288]: E0910 23:53:05.492582 2288 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 23:53:05.565174 kubelet[2288]: I0910 23:53:05.565127 2288 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 23:53:05.565174 kubelet[2288]: E0910 23:53:05.565175 2288 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 10 23:53:05.654573 kubelet[2288]: I0910 23:53:05.654522 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 23:53:05.661568 kubelet[2288]: E0910 23:53:05.661539 2288 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 10 23:53:05.661568 kubelet[2288]: I0910 23:53:05.661567 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:53:05.664158 kubelet[2288]: E0910 23:53:05.663234 2288 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:53:05.664218 kubelet[2288]: I0910 23:53:05.664206 2288 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-scheduler-localhost" Sep 10 23:53:05.665820 kubelet[2288]: E0910 23:53:05.665795 2288 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 10 23:53:06.142603 kubelet[2288]: I0910 23:53:06.142561 2288 apiserver.go:52] "Watching apiserver" Sep 10 23:53:06.154234 kubelet[2288]: I0910 23:53:06.154205 2288 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 23:53:07.314055 kubelet[2288]: I0910 23:53:07.314016 2288 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 23:53:07.318788 kubelet[2288]: E0910 23:53:07.318734 2288 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:07.329912 systemd[1]: Reload requested from client PID 2566 ('systemctl') (unit session-7.scope)... Sep 10 23:53:07.329925 systemd[1]: Reloading... Sep 10 23:53:07.387232 zram_generator::config[2612]: No configuration found. Sep 10 23:53:07.532353 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 10 23:53:07.628755 systemd[1]: Reloading finished in 298 ms. Sep 10 23:53:07.660325 kubelet[2288]: I0910 23:53:07.660064 2288 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:53:07.660382 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:53:07.681011 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 23:53:07.681302 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 10 23:53:07.681361 systemd[1]: kubelet.service: Consumed 1.095s CPU time, 128.6M memory peak. Sep 10 23:53:07.683303 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:53:07.832900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:53:07.854579 (kubelet)[2651]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 23:53:07.892266 kubelet[2651]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:53:07.892266 kubelet[2651]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 23:53:07.892266 kubelet[2651]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:53:07.892575 kubelet[2651]: I0910 23:53:07.892292 2651 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 23:53:07.898779 kubelet[2651]: I0910 23:53:07.898740 2651 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 10 23:53:07.898779 kubelet[2651]: I0910 23:53:07.898769 2651 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 23:53:07.899025 kubelet[2651]: I0910 23:53:07.899000 2651 server.go:954] "Client rotation is on, will bootstrap in background" Sep 10 23:53:07.900325 kubelet[2651]: I0910 23:53:07.900302 2651 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 10 23:53:07.902571 kubelet[2651]: I0910 23:53:07.902550 2651 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:53:07.906241 kubelet[2651]: I0910 23:53:07.906221 2651 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 23:53:07.909207 kubelet[2651]: I0910 23:53:07.909175 2651 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 23:53:07.909413 kubelet[2651]: I0910 23:53:07.909382 2651 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 23:53:07.909653 kubelet[2651]: I0910 23:53:07.909406 2651 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 23:53:07.909750 kubelet[2651]: I0910 23:53:07.909658 2651 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 23:53:07.909750 kubelet[2651]: I0910 23:53:07.909672 2651 container_manager_linux.go:304] "Creating device plugin manager" Sep 10 23:53:07.909750 kubelet[2651]: I0910 23:53:07.909714 2651 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:53:07.909863 kubelet[2651]: I0910 23:53:07.909844 2651 kubelet.go:446] "Attempting to sync node with API server" Sep 10 23:53:07.909863 kubelet[2651]: I0910 23:53:07.909863 2651 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 23:53:07.909927 kubelet[2651]: I0910 23:53:07.909901 2651 kubelet.go:352] "Adding apiserver pod source" Sep 10 23:53:07.909927 kubelet[2651]: I0910 23:53:07.909917 2651 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 23:53:07.910672 kubelet[2651]: I0910 23:53:07.910460 2651 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Sep 10 23:53:07.912284 kubelet[2651]: I0910 23:53:07.912263 2651 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 10 23:53:07.912950 kubelet[2651]: I0910 23:53:07.912929 2651 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 23:53:07.913000 kubelet[2651]: I0910 23:53:07.912967 2651 server.go:1287] "Started kubelet" Sep 10 23:53:07.914143 kubelet[2651]: I0910 23:53:07.913152 2651 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 23:53:07.914183 kubelet[2651]: I0910 23:53:07.914175 
2651 server.go:479] "Adding debug handlers to kubelet server" Sep 10 23:53:07.917424 kubelet[2651]: I0910 23:53:07.917338 2651 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:53:07.918081 kubelet[2651]: I0910 23:53:07.917940 2651 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 23:53:07.921205 kubelet[2651]: I0910 23:53:07.921057 2651 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:53:07.926117 kubelet[2651]: I0910 23:53:07.925903 2651 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:53:07.930186 kubelet[2651]: I0910 23:53:07.929553 2651 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 23:53:07.930186 kubelet[2651]: I0910 23:53:07.929632 2651 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 23:53:07.930186 kubelet[2651]: I0910 23:53:07.929710 2651 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:53:07.930856 kubelet[2651]: I0910 23:53:07.930822 2651 factory.go:221] Registration of the systemd container factory successfully Sep 10 23:53:07.930981 kubelet[2651]: I0910 23:53:07.930947 2651 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:53:07.931556 kubelet[2651]: E0910 23:53:07.930803 2651 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 23:53:07.932156 kubelet[2651]: I0910 23:53:07.932125 2651 factory.go:221] Registration of the containerd container factory successfully Sep 10 23:53:07.938207 kubelet[2651]: I0910 23:53:07.938174 2651 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 10 23:53:07.940178 kubelet[2651]: I0910 23:53:07.940153 2651 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 10 23:53:07.940274 kubelet[2651]: I0910 23:53:07.940263 2651 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 10 23:53:07.940337 kubelet[2651]: I0910 23:53:07.940325 2651 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 10 23:53:07.940386 kubelet[2651]: I0910 23:53:07.940379 2651 kubelet.go:2382] "Starting kubelet main sync loop" Sep 10 23:53:07.940484 kubelet[2651]: E0910 23:53:07.940468 2651 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:53:07.969637 kubelet[2651]: I0910 23:53:07.969609 2651 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 23:53:07.969637 kubelet[2651]: I0910 23:53:07.969627 2651 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 23:53:07.969637 kubelet[2651]: I0910 23:53:07.969648 2651 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:53:07.969792 kubelet[2651]: I0910 23:53:07.969785 2651 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 23:53:07.969815 kubelet[2651]: I0910 23:53:07.969795 2651 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 23:53:07.969815 kubelet[2651]: I0910 23:53:07.969812 2651 policy_none.go:49] "None policy: Start" Sep 10 23:53:07.969860 kubelet[2651]: I0910 23:53:07.969819 2651 memory_manager.go:186] "Starting 
memorymanager" policy="None" Sep 10 23:53:07.969860 kubelet[2651]: I0910 23:53:07.969828 2651 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:53:07.969950 kubelet[2651]: I0910 23:53:07.969933 2651 state_mem.go:75] "Updated machine memory state" Sep 10 23:53:07.973778 kubelet[2651]: I0910 23:53:07.973753 2651 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 10 23:53:07.973934 kubelet[2651]: I0910 23:53:07.973913 2651 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:53:07.973974 kubelet[2651]: I0910 23:53:07.973930 2651 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:53:07.974085 kubelet[2651]: I0910 23:53:07.974063 2651 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:53:07.976128 kubelet[2651]: E0910 23:53:07.976095 2651 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 23:53:08.041705 kubelet[2651]: I0910 23:53:08.041427 2651 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 23:53:08.041705 kubelet[2651]: I0910 23:53:08.041492 2651 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:53:08.041705 kubelet[2651]: I0910 23:53:08.041429 2651 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 23:53:08.047399 kubelet[2651]: E0910 23:53:08.047364 2651 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 23:53:08.078286 kubelet[2651]: I0910 23:53:08.078260 2651 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:53:08.084530 kubelet[2651]: I0910 23:53:08.084490 2651 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 10 23:53:08.084852 kubelet[2651]: I0910 23:53:08.084672 2651 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 23:53:08.131593 kubelet[2651]: I0910 23:53:08.131546 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62bf92eb55477fec747c29fa808c9c9b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"62bf92eb55477fec747c29fa808c9c9b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:53:08.131593 kubelet[2651]: I0910 23:53:08.131591 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 
23:53:08.131738 kubelet[2651]: I0910 23:53:08.131612 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:53:08.131738 kubelet[2651]: I0910 23:53:08.131631 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:53:08.131738 kubelet[2651]: I0910 23:53:08.131683 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:53:08.131738 kubelet[2651]: I0910 23:53:08.131736 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 10 23:53:08.131918 kubelet[2651]: I0910 23:53:08.131752 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62bf92eb55477fec747c29fa808c9c9b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"62bf92eb55477fec747c29fa808c9c9b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:53:08.131918 
kubelet[2651]: I0910 23:53:08.131768 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62bf92eb55477fec747c29fa808c9c9b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"62bf92eb55477fec747c29fa808c9c9b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:53:08.131918 kubelet[2651]: I0910 23:53:08.131787 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:53:08.347782 kubelet[2651]: E0910 23:53:08.347739 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:08.347876 kubelet[2651]: E0910 23:53:08.347812 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:08.347876 kubelet[2651]: E0910 23:53:08.347853 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:08.911644 kubelet[2651]: I0910 23:53:08.911609 2651 apiserver.go:52] "Watching apiserver" Sep 10 23:53:08.930640 kubelet[2651]: I0910 23:53:08.930596 2651 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 23:53:08.955513 kubelet[2651]: I0910 23:53:08.955377 2651 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 23:53:08.955513 kubelet[2651]: E0910 23:53:08.955511 2651 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:08.955918 kubelet[2651]: E0910 23:53:08.955850 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:08.960163 kubelet[2651]: E0910 23:53:08.959858 2651 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 23:53:08.960163 kubelet[2651]: E0910 23:53:08.959989 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:08.976565 kubelet[2651]: I0910 23:53:08.976511 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.976445566 podStartE2EDuration="1.976445566s" podCreationTimestamp="2025-09-10 23:53:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:53:08.976292003 +0000 UTC m=+1.118269245" watchObservedRunningTime="2025-09-10 23:53:08.976445566 +0000 UTC m=+1.118422808" Sep 10 23:53:08.992773 kubelet[2651]: I0910 23:53:08.992726 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=0.992713954 podStartE2EDuration="992.713954ms" podCreationTimestamp="2025-09-10 23:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:53:08.992425548 +0000 UTC m=+1.134402790" watchObservedRunningTime="2025-09-10 23:53:08.992713954 +0000 UTC m=+1.134691196" Sep 10 
23:53:08.992864 kubelet[2651]: I0910 23:53:08.992802 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.992798076 podStartE2EDuration="992.798076ms" podCreationTimestamp="2025-09-10 23:53:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:53:08.982916704 +0000 UTC m=+1.124893986" watchObservedRunningTime="2025-09-10 23:53:08.992798076 +0000 UTC m=+1.134775318" Sep 10 23:53:09.956720 kubelet[2651]: E0910 23:53:09.956694 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:09.957274 kubelet[2651]: E0910 23:53:09.957255 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:10.958535 kubelet[2651]: E0910 23:53:10.958506 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:11.960645 kubelet[2651]: E0910 23:53:11.960327 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:13.816656 kubelet[2651]: I0910 23:53:13.816614 2651 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 23:53:13.816992 containerd[1529]: time="2025-09-10T23:53:13.816902414Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 10 23:53:13.817196 kubelet[2651]: I0910 23:53:13.817073 2651 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 23:53:14.811824 systemd[1]: Created slice kubepods-besteffort-pod6ff93218_48c7_49c2_9179_e91cbbc88941.slice - libcontainer container kubepods-besteffort-pod6ff93218_48c7_49c2_9179_e91cbbc88941.slice. Sep 10 23:53:14.880917 kubelet[2651]: I0910 23:53:14.880854 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ff93218-48c7-49c2-9179-e91cbbc88941-lib-modules\") pod \"kube-proxy-ktgng\" (UID: \"6ff93218-48c7-49c2-9179-e91cbbc88941\") " pod="kube-system/kube-proxy-ktgng" Sep 10 23:53:14.880917 kubelet[2651]: I0910 23:53:14.880893 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bctb\" (UniqueName: \"kubernetes.io/projected/6ff93218-48c7-49c2-9179-e91cbbc88941-kube-api-access-8bctb\") pod \"kube-proxy-ktgng\" (UID: \"6ff93218-48c7-49c2-9179-e91cbbc88941\") " pod="kube-system/kube-proxy-ktgng" Sep 10 23:53:14.880917 kubelet[2651]: I0910 23:53:14.880924 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6ff93218-48c7-49c2-9179-e91cbbc88941-kube-proxy\") pod \"kube-proxy-ktgng\" (UID: \"6ff93218-48c7-49c2-9179-e91cbbc88941\") " pod="kube-system/kube-proxy-ktgng" Sep 10 23:53:14.881340 kubelet[2651]: I0910 23:53:14.880961 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ff93218-48c7-49c2-9179-e91cbbc88941-xtables-lock\") pod \"kube-proxy-ktgng\" (UID: \"6ff93218-48c7-49c2-9179-e91cbbc88941\") " pod="kube-system/kube-proxy-ktgng" Sep 10 23:53:14.935040 systemd[1]: Created slice 
kubepods-besteffort-podd51592f1_611e_44b7_bed5_5db4dd39fce9.slice - libcontainer container kubepods-besteffort-podd51592f1_611e_44b7_bed5_5db4dd39fce9.slice. Sep 10 23:53:14.981726 kubelet[2651]: I0910 23:53:14.981680 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-629ln\" (UniqueName: \"kubernetes.io/projected/d51592f1-611e-44b7-bed5-5db4dd39fce9-kube-api-access-629ln\") pod \"tigera-operator-755d956888-svdct\" (UID: \"d51592f1-611e-44b7-bed5-5db4dd39fce9\") " pod="tigera-operator/tigera-operator-755d956888-svdct" Sep 10 23:53:14.981846 kubelet[2651]: I0910 23:53:14.981743 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d51592f1-611e-44b7-bed5-5db4dd39fce9-var-lib-calico\") pod \"tigera-operator-755d956888-svdct\" (UID: \"d51592f1-611e-44b7-bed5-5db4dd39fce9\") " pod="tigera-operator/tigera-operator-755d956888-svdct" Sep 10 23:53:15.128280 kubelet[2651]: E0910 23:53:15.128132 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:15.128849 containerd[1529]: time="2025-09-10T23:53:15.128817140Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ktgng,Uid:6ff93218-48c7-49c2-9179-e91cbbc88941,Namespace:kube-system,Attempt:0,}" Sep 10 23:53:15.145770 containerd[1529]: time="2025-09-10T23:53:15.145734050Z" level=info msg="connecting to shim 681c546862868d789b7351220cbb5b3f86f47f0e75ef372ae7c0a5e5ff6eaf58" address="unix:///run/containerd/s/6868b25d2cd29a02ec8d5f0e52f9faf8e15cab2f278fc05a94238e4c987d6cba" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:53:15.170307 systemd[1]: Started cri-containerd-681c546862868d789b7351220cbb5b3f86f47f0e75ef372ae7c0a5e5ff6eaf58.scope - libcontainer container 
681c546862868d789b7351220cbb5b3f86f47f0e75ef372ae7c0a5e5ff6eaf58. Sep 10 23:53:15.190788 containerd[1529]: time="2025-09-10T23:53:15.190754384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ktgng,Uid:6ff93218-48c7-49c2-9179-e91cbbc88941,Namespace:kube-system,Attempt:0,} returns sandbox id \"681c546862868d789b7351220cbb5b3f86f47f0e75ef372ae7c0a5e5ff6eaf58\"" Sep 10 23:53:15.191535 kubelet[2651]: E0910 23:53:15.191512 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:15.193633 containerd[1529]: time="2025-09-10T23:53:15.193599262Z" level=info msg="CreateContainer within sandbox \"681c546862868d789b7351220cbb5b3f86f47f0e75ef372ae7c0a5e5ff6eaf58\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 23:53:15.203362 containerd[1529]: time="2025-09-10T23:53:15.203333355Z" level=info msg="Container d0fe7e2ed078773d2244c5faf1362329fe934c3da92cf2e3cffafa3614c2e855: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:53:15.209715 containerd[1529]: time="2025-09-10T23:53:15.209669881Z" level=info msg="CreateContainer within sandbox \"681c546862868d789b7351220cbb5b3f86f47f0e75ef372ae7c0a5e5ff6eaf58\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d0fe7e2ed078773d2244c5faf1362329fe934c3da92cf2e3cffafa3614c2e855\"" Sep 10 23:53:15.210273 containerd[1529]: time="2025-09-10T23:53:15.210248569Z" level=info msg="StartContainer for \"d0fe7e2ed078773d2244c5faf1362329fe934c3da92cf2e3cffafa3614c2e855\"" Sep 10 23:53:15.211849 containerd[1529]: time="2025-09-10T23:53:15.211741709Z" level=info msg="connecting to shim d0fe7e2ed078773d2244c5faf1362329fe934c3da92cf2e3cffafa3614c2e855" address="unix:///run/containerd/s/6868b25d2cd29a02ec8d5f0e52f9faf8e15cab2f278fc05a94238e4c987d6cba" protocol=ttrpc version=3 Sep 10 23:53:15.231301 systemd[1]: Started 
cri-containerd-d0fe7e2ed078773d2244c5faf1362329fe934c3da92cf2e3cffafa3614c2e855.scope - libcontainer container d0fe7e2ed078773d2244c5faf1362329fe934c3da92cf2e3cffafa3614c2e855. Sep 10 23:53:15.238725 containerd[1529]: time="2025-09-10T23:53:15.238688596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-svdct,Uid:d51592f1-611e-44b7-bed5-5db4dd39fce9,Namespace:tigera-operator,Attempt:0,}" Sep 10 23:53:15.257528 containerd[1529]: time="2025-09-10T23:53:15.257440492Z" level=info msg="connecting to shim 64242e05fb584aa9c445316bac9ca2040379153754fc14557be133fa4d852429" address="unix:///run/containerd/s/54ae66f8b718aac7fac351011a58884a39a9474865e17b32d264c58a51d222be" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:53:15.273381 containerd[1529]: time="2025-09-10T23:53:15.273033384Z" level=info msg="StartContainer for \"d0fe7e2ed078773d2244c5faf1362329fe934c3da92cf2e3cffafa3614c2e855\" returns successfully" Sep 10 23:53:15.285313 systemd[1]: Started cri-containerd-64242e05fb584aa9c445316bac9ca2040379153754fc14557be133fa4d852429.scope - libcontainer container 64242e05fb584aa9c445316bac9ca2040379153754fc14557be133fa4d852429. 
Sep 10 23:53:15.320562 containerd[1529]: time="2025-09-10T23:53:15.320464710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-svdct,Uid:d51592f1-611e-44b7-bed5-5db4dd39fce9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"64242e05fb584aa9c445316bac9ca2040379153754fc14557be133fa4d852429\"" Sep 10 23:53:15.322343 containerd[1529]: time="2025-09-10T23:53:15.322318696Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 10 23:53:15.842041 kubelet[2651]: E0910 23:53:15.842000 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:15.967973 kubelet[2651]: E0910 23:53:15.967945 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:15.969688 kubelet[2651]: E0910 23:53:15.969656 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:16.285035 kubelet[2651]: E0910 23:53:16.284926 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:16.299862 kubelet[2651]: I0910 23:53:16.299564 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ktgng" podStartSLOduration=2.299547993 podStartE2EDuration="2.299547993s" podCreationTimestamp="2025-09-10 23:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:53:15.984609597 +0000 UTC m=+8.126586839" watchObservedRunningTime="2025-09-10 23:53:16.299547993 +0000 UTC m=+8.441525235" Sep 10 
23:53:16.500015 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1926884035.mount: Deactivated successfully. Sep 10 23:53:16.970408 kubelet[2651]: E0910 23:53:16.970207 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:17.972231 kubelet[2651]: E0910 23:53:17.972202 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:18.865176 containerd[1529]: time="2025-09-10T23:53:18.864795897Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:18.865785 containerd[1529]: time="2025-09-10T23:53:18.865763468Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 10 23:53:18.866640 containerd[1529]: time="2025-09-10T23:53:18.866611237Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:18.868624 containerd[1529]: time="2025-09-10T23:53:18.868578380Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:18.869245 containerd[1529]: time="2025-09-10T23:53:18.869217747Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 3.546004879s" Sep 10 23:53:18.869289 containerd[1529]: 
time="2025-09-10T23:53:18.869252067Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 10 23:53:18.873170 containerd[1529]: time="2025-09-10T23:53:18.872860028Z" level=info msg="CreateContainer within sandbox \"64242e05fb584aa9c445316bac9ca2040379153754fc14557be133fa4d852429\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 10 23:53:18.879292 containerd[1529]: time="2025-09-10T23:53:18.879266820Z" level=info msg="Container 0d4aedc8b3f70b59b720a87297eedefcd5d07c082ac46cf509d51c3757526f37: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:53:18.883904 containerd[1529]: time="2025-09-10T23:53:18.883876951Z" level=info msg="CreateContainer within sandbox \"64242e05fb584aa9c445316bac9ca2040379153754fc14557be133fa4d852429\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"0d4aedc8b3f70b59b720a87297eedefcd5d07c082ac46cf509d51c3757526f37\"" Sep 10 23:53:18.884365 containerd[1529]: time="2025-09-10T23:53:18.884344037Z" level=info msg="StartContainer for \"0d4aedc8b3f70b59b720a87297eedefcd5d07c082ac46cf509d51c3757526f37\"" Sep 10 23:53:18.885317 containerd[1529]: time="2025-09-10T23:53:18.885280127Z" level=info msg="connecting to shim 0d4aedc8b3f70b59b720a87297eedefcd5d07c082ac46cf509d51c3757526f37" address="unix:///run/containerd/s/54ae66f8b718aac7fac351011a58884a39a9474865e17b32d264c58a51d222be" protocol=ttrpc version=3 Sep 10 23:53:18.907304 systemd[1]: Started cri-containerd-0d4aedc8b3f70b59b720a87297eedefcd5d07c082ac46cf509d51c3757526f37.scope - libcontainer container 0d4aedc8b3f70b59b720a87297eedefcd5d07c082ac46cf509d51c3757526f37. 
Sep 10 23:53:18.930299 containerd[1529]: time="2025-09-10T23:53:18.930207191Z" level=info msg="StartContainer for \"0d4aedc8b3f70b59b720a87297eedefcd5d07c082ac46cf509d51c3757526f37\" returns successfully" Sep 10 23:53:18.985470 kubelet[2651]: I0910 23:53:18.985418 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-svdct" podStartSLOduration=1.436134091 podStartE2EDuration="4.985401771s" podCreationTimestamp="2025-09-10 23:53:14 +0000 UTC" firstStartedPulling="2025-09-10 23:53:15.321481124 +0000 UTC m=+7.463458366" lastFinishedPulling="2025-09-10 23:53:18.870748804 +0000 UTC m=+11.012726046" observedRunningTime="2025-09-10 23:53:18.984953366 +0000 UTC m=+11.126930608" watchObservedRunningTime="2025-09-10 23:53:18.985401771 +0000 UTC m=+11.127379013" Sep 10 23:53:21.370318 kubelet[2651]: E0910 23:53:21.369208 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:21.978622 kubelet[2651]: E0910 23:53:21.978510 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:24.103470 sudo[1735]: pam_unix(sudo:session): session closed for user root Sep 10 23:53:24.105641 sshd[1734]: Connection closed by 10.0.0.1 port 54564 Sep 10 23:53:24.106090 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Sep 10 23:53:24.110015 systemd[1]: sshd@6-10.0.0.82:22-10.0.0.1:54564.service: Deactivated successfully. Sep 10 23:53:24.113072 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 23:53:24.113440 systemd[1]: session-7.scope: Consumed 6.732s CPU time, 227.4M memory peak. Sep 10 23:53:24.115182 systemd-logind[1514]: Session 7 logged out. Waiting for processes to exit. 
Sep 10 23:53:24.117741 systemd-logind[1514]: Removed session 7.
Sep 10 23:53:24.506721 update_engine[1518]: I20250910 23:53:24.506583 1518 update_attempter.cc:509] Updating boot flags...
Sep 10 23:53:27.553533 systemd[1]: Created slice kubepods-besteffort-podb6c33e64_7e8f_4fb3_983f_33d17eb15cf3.slice - libcontainer container kubepods-besteffort-podb6c33e64_7e8f_4fb3_983f_33d17eb15cf3.slice.
Sep 10 23:53:27.581029 kubelet[2651]: I0910 23:53:27.580980 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6c33e64-7e8f-4fb3-983f-33d17eb15cf3-tigera-ca-bundle\") pod \"calico-typha-574968fcbc-79t4d\" (UID: \"b6c33e64-7e8f-4fb3-983f-33d17eb15cf3\") " pod="calico-system/calico-typha-574968fcbc-79t4d"
Sep 10 23:53:27.581029 kubelet[2651]: I0910 23:53:27.581030 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbwj9\" (UniqueName: \"kubernetes.io/projected/b6c33e64-7e8f-4fb3-983f-33d17eb15cf3-kube-api-access-rbwj9\") pod \"calico-typha-574968fcbc-79t4d\" (UID: \"b6c33e64-7e8f-4fb3-983f-33d17eb15cf3\") " pod="calico-system/calico-typha-574968fcbc-79t4d"
Sep 10 23:53:27.581432 kubelet[2651]: I0910 23:53:27.581061 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b6c33e64-7e8f-4fb3-983f-33d17eb15cf3-typha-certs\") pod \"calico-typha-574968fcbc-79t4d\" (UID: \"b6c33e64-7e8f-4fb3-983f-33d17eb15cf3\") " pod="calico-system/calico-typha-574968fcbc-79t4d"
Sep 10 23:53:27.861281 kubelet[2651]: E0910 23:53:27.861229 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:53:27.863390 containerd[1529]: time="2025-09-10T23:53:27.863345096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-574968fcbc-79t4d,Uid:b6c33e64-7e8f-4fb3-983f-33d17eb15cf3,Namespace:calico-system,Attempt:0,}"
Sep 10 23:53:27.888898 systemd[1]: Created slice kubepods-besteffort-pode5e8227d_37d8_4e34_989e_b343281ce518.slice - libcontainer container kubepods-besteffort-pode5e8227d_37d8_4e34_989e_b343281ce518.slice.
Sep 10 23:53:27.915623 containerd[1529]: time="2025-09-10T23:53:27.914359057Z" level=info msg="connecting to shim fe75679a0575428e73f113e45116484f16004bba03c3aaa5289d285cecd91768" address="unix:///run/containerd/s/61f1db51872d3900fd722babff4c23d1a48d871d971b32f2b4ae037bbb70d70f" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:53:27.966284 systemd[1]: Started cri-containerd-fe75679a0575428e73f113e45116484f16004bba03c3aaa5289d285cecd91768.scope - libcontainer container fe75679a0575428e73f113e45116484f16004bba03c3aaa5289d285cecd91768.
Sep 10 23:53:27.986149 kubelet[2651]: I0910 23:53:27.984640 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e5e8227d-37d8-4e34-989e-b343281ce518-cni-net-dir\") pod \"calico-node-snpjh\" (UID: \"e5e8227d-37d8-4e34-989e-b343281ce518\") " pod="calico-system/calico-node-snpjh"
Sep 10 23:53:27.986149 kubelet[2651]: I0910 23:53:27.984682 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e5e8227d-37d8-4e34-989e-b343281ce518-node-certs\") pod \"calico-node-snpjh\" (UID: \"e5e8227d-37d8-4e34-989e-b343281ce518\") " pod="calico-system/calico-node-snpjh"
Sep 10 23:53:27.986149 kubelet[2651]: I0910 23:53:27.984702 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzkws\" (UniqueName: \"kubernetes.io/projected/e5e8227d-37d8-4e34-989e-b343281ce518-kube-api-access-xzkws\") pod \"calico-node-snpjh\" (UID: \"e5e8227d-37d8-4e34-989e-b343281ce518\") " pod="calico-system/calico-node-snpjh"
Sep 10 23:53:27.986149 kubelet[2651]: I0910 23:53:27.984724 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e5e8227d-37d8-4e34-989e-b343281ce518-cni-bin-dir\") pod \"calico-node-snpjh\" (UID: \"e5e8227d-37d8-4e34-989e-b343281ce518\") " pod="calico-system/calico-node-snpjh"
Sep 10 23:53:27.986149 kubelet[2651]: I0910 23:53:27.984738 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e5e8227d-37d8-4e34-989e-b343281ce518-lib-modules\") pod \"calico-node-snpjh\" (UID: \"e5e8227d-37d8-4e34-989e-b343281ce518\") " pod="calico-system/calico-node-snpjh"
Sep 10 23:53:27.986377 kubelet[2651]: I0910 23:53:27.984753 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5e8227d-37d8-4e34-989e-b343281ce518-xtables-lock\") pod \"calico-node-snpjh\" (UID: \"e5e8227d-37d8-4e34-989e-b343281ce518\") " pod="calico-system/calico-node-snpjh"
Sep 10 23:53:27.986377 kubelet[2651]: I0910 23:53:27.984767 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e5e8227d-37d8-4e34-989e-b343281ce518-flexvol-driver-host\") pod \"calico-node-snpjh\" (UID: \"e5e8227d-37d8-4e34-989e-b343281ce518\") " pod="calico-system/calico-node-snpjh"
Sep 10 23:53:27.986377 kubelet[2651]: I0910 23:53:27.984782 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e5e8227d-37d8-4e34-989e-b343281ce518-var-lib-calico\") pod \"calico-node-snpjh\" (UID: \"e5e8227d-37d8-4e34-989e-b343281ce518\") " pod="calico-system/calico-node-snpjh"
Sep 10 23:53:27.986377 kubelet[2651]: I0910 23:53:27.984795 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e5e8227d-37d8-4e34-989e-b343281ce518-var-run-calico\") pod \"calico-node-snpjh\" (UID: \"e5e8227d-37d8-4e34-989e-b343281ce518\") " pod="calico-system/calico-node-snpjh"
Sep 10 23:53:27.986377 kubelet[2651]: I0910 23:53:27.984810 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e5e8227d-37d8-4e34-989e-b343281ce518-cni-log-dir\") pod \"calico-node-snpjh\" (UID: \"e5e8227d-37d8-4e34-989e-b343281ce518\") " pod="calico-system/calico-node-snpjh"
Sep 10 23:53:27.986479 kubelet[2651]: I0910 23:53:27.984828 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e5e8227d-37d8-4e34-989e-b343281ce518-policysync\") pod \"calico-node-snpjh\" (UID: \"e5e8227d-37d8-4e34-989e-b343281ce518\") " pod="calico-system/calico-node-snpjh"
Sep 10 23:53:27.986479 kubelet[2651]: I0910 23:53:27.984878 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5e8227d-37d8-4e34-989e-b343281ce518-tigera-ca-bundle\") pod \"calico-node-snpjh\" (UID: \"e5e8227d-37d8-4e34-989e-b343281ce518\") " pod="calico-system/calico-node-snpjh"
Sep 10 23:53:28.004723 containerd[1529]: time="2025-09-10T23:53:28.004564622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-574968fcbc-79t4d,Uid:b6c33e64-7e8f-4fb3-983f-33d17eb15cf3,Namespace:calico-system,Attempt:0,} returns sandbox id \"fe75679a0575428e73f113e45116484f16004bba03c3aaa5289d285cecd91768\""
Sep 10 23:53:28.008427 kubelet[2651]: E0910 23:53:28.008403 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:53:28.009953 containerd[1529]: time="2025-09-10T23:53:28.009923493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 10 23:53:28.095758 kubelet[2651]: E0910 23:53:28.095734 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.095954 kubelet[2651]: W0910 23:53:28.095856 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.096659 kubelet[2651]: E0910 23:53:28.096609 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.100984 kubelet[2651]: E0910 23:53:28.100963 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.101150 kubelet[2651]: W0910 23:53:28.101099 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.101150 kubelet[2651]: E0910 23:53:28.101124 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.105738 kubelet[2651]: E0910 23:53:28.105709 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.105738 kubelet[2651]: W0910 23:53:28.105728 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.105738 kubelet[2651]: E0910 23:53:28.105744 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.124758 kubelet[2651]: E0910 23:53:28.124643 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xvkpt" podUID="62bd140d-cac0-41ac-b0b4-d6e71703a322"
Sep 10 23:53:28.169645 kubelet[2651]: E0910 23:53:28.169591 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.169645 kubelet[2651]: W0910 23:53:28.169627 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.169645 kubelet[2651]: E0910 23:53:28.169647 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.169854 kubelet[2651]: E0910 23:53:28.169829 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.169888 kubelet[2651]: W0910 23:53:28.169844 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.169911 kubelet[2651]: E0910 23:53:28.169889 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.170081 kubelet[2651]: E0910 23:53:28.170062 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.170081 kubelet[2651]: W0910 23:53:28.170073 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.170081 kubelet[2651]: E0910 23:53:28.170081 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.170270 kubelet[2651]: E0910 23:53:28.170249 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.170270 kubelet[2651]: W0910 23:53:28.170260 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.170328 kubelet[2651]: E0910 23:53:28.170273 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.170429 kubelet[2651]: E0910 23:53:28.170413 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.170468 kubelet[2651]: W0910 23:53:28.170422 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.170468 kubelet[2651]: E0910 23:53:28.170440 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.170567 kubelet[2651]: E0910 23:53:28.170551 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.170567 kubelet[2651]: W0910 23:53:28.170560 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.170567 kubelet[2651]: E0910 23:53:28.170568 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.170703 kubelet[2651]: E0910 23:53:28.170685 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.170703 kubelet[2651]: W0910 23:53:28.170698 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.170751 kubelet[2651]: E0910 23:53:28.170713 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.170847 kubelet[2651]: E0910 23:53:28.170830 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.170847 kubelet[2651]: W0910 23:53:28.170841 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.170901 kubelet[2651]: E0910 23:53:28.170850 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.170999 kubelet[2651]: E0910 23:53:28.170983 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.170999 kubelet[2651]: W0910 23:53:28.170994 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.171061 kubelet[2651]: E0910 23:53:28.171010 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.171168 kubelet[2651]: E0910 23:53:28.171155 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.171168 kubelet[2651]: W0910 23:53:28.171165 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.171229 kubelet[2651]: E0910 23:53:28.171173 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.171323 kubelet[2651]: E0910 23:53:28.171297 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.171323 kubelet[2651]: W0910 23:53:28.171315 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.171323 kubelet[2651]: E0910 23:53:28.171322 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.171466 kubelet[2651]: E0910 23:53:28.171448 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.171466 kubelet[2651]: W0910 23:53:28.171465 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.171521 kubelet[2651]: E0910 23:53:28.171473 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.171642 kubelet[2651]: E0910 23:53:28.171623 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.171642 kubelet[2651]: W0910 23:53:28.171634 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.171642 kubelet[2651]: E0910 23:53:28.171642 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.171782 kubelet[2651]: E0910 23:53:28.171758 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.171782 kubelet[2651]: W0910 23:53:28.171776 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.171782 kubelet[2651]: E0910 23:53:28.171784 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.171956 kubelet[2651]: E0910 23:53:28.171934 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.171956 kubelet[2651]: W0910 23:53:28.171944 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.171956 kubelet[2651]: E0910 23:53:28.171951 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.172151 kubelet[2651]: E0910 23:53:28.172123 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.172369 kubelet[2651]: W0910 23:53:28.172135 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.172405 kubelet[2651]: E0910 23:53:28.172370 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.172682 kubelet[2651]: E0910 23:53:28.172631 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.172682 kubelet[2651]: W0910 23:53:28.172675 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.172743 kubelet[2651]: E0910 23:53:28.172688 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.173204 kubelet[2651]: E0910 23:53:28.172951 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.173204 kubelet[2651]: W0910 23:53:28.173155 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.173204 kubelet[2651]: E0910 23:53:28.173171 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.173818 kubelet[2651]: E0910 23:53:28.173789 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.173818 kubelet[2651]: W0910 23:53:28.173806 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.173818 kubelet[2651]: E0910 23:53:28.173821 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.175252 kubelet[2651]: E0910 23:53:28.175222 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.175252 kubelet[2651]: W0910 23:53:28.175238 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.175252 kubelet[2651]: E0910 23:53:28.175251 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.186665 kubelet[2651]: E0910 23:53:28.186643 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.186665 kubelet[2651]: W0910 23:53:28.186664 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.186735 kubelet[2651]: E0910 23:53:28.186684 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.186735 kubelet[2651]: I0910 23:53:28.186715 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/62bd140d-cac0-41ac-b0b4-d6e71703a322-kubelet-dir\") pod \"csi-node-driver-xvkpt\" (UID: \"62bd140d-cac0-41ac-b0b4-d6e71703a322\") " pod="calico-system/csi-node-driver-xvkpt"
Sep 10 23:53:28.186879 kubelet[2651]: E0910 23:53:28.186868 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.186907 kubelet[2651]: W0910 23:53:28.186879 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.186907 kubelet[2651]: E0910 23:53:28.186894 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.186966 kubelet[2651]: I0910 23:53:28.186910 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/62bd140d-cac0-41ac-b0b4-d6e71703a322-registration-dir\") pod \"csi-node-driver-xvkpt\" (UID: \"62bd140d-cac0-41ac-b0b4-d6e71703a322\") " pod="calico-system/csi-node-driver-xvkpt"
Sep 10 23:53:28.187070 kubelet[2651]: E0910 23:53:28.187055 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.187105 kubelet[2651]: W0910 23:53:28.187070 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.187105 kubelet[2651]: E0910 23:53:28.187086 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.187105 kubelet[2651]: I0910 23:53:28.187101 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/62bd140d-cac0-41ac-b0b4-d6e71703a322-socket-dir\") pod \"csi-node-driver-xvkpt\" (UID: \"62bd140d-cac0-41ac-b0b4-d6e71703a322\") " pod="calico-system/csi-node-driver-xvkpt"
Sep 10 23:53:28.187256 kubelet[2651]: E0910 23:53:28.187242 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.187256 kubelet[2651]: W0910 23:53:28.187254 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.187309 kubelet[2651]: E0910 23:53:28.187268 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.187309 kubelet[2651]: I0910 23:53:28.187282 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5pmc\" (UniqueName: \"kubernetes.io/projected/62bd140d-cac0-41ac-b0b4-d6e71703a322-kube-api-access-z5pmc\") pod \"csi-node-driver-xvkpt\" (UID: \"62bd140d-cac0-41ac-b0b4-d6e71703a322\") " pod="calico-system/csi-node-driver-xvkpt"
Sep 10 23:53:28.187478 kubelet[2651]: E0910 23:53:28.187464 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.187478 kubelet[2651]: W0910 23:53:28.187477 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.187542 kubelet[2651]: E0910 23:53:28.187492 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.187542 kubelet[2651]: I0910 23:53:28.187506 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/62bd140d-cac0-41ac-b0b4-d6e71703a322-varrun\") pod \"csi-node-driver-xvkpt\" (UID: \"62bd140d-cac0-41ac-b0b4-d6e71703a322\") " pod="calico-system/csi-node-driver-xvkpt"
Sep 10 23:53:28.187672 kubelet[2651]: E0910 23:53:28.187659 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.187672 kubelet[2651]: W0910 23:53:28.187670 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.187745 kubelet[2651]: E0910 23:53:28.187683 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.187815 kubelet[2651]: E0910 23:53:28.187804 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.187815 kubelet[2651]: W0910 23:53:28.187813 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.187872 kubelet[2651]: E0910 23:53:28.187826 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.188063 kubelet[2651]: E0910 23:53:28.188038 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.188063 kubelet[2651]: W0910 23:53:28.188061 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.188126 kubelet[2651]: E0910 23:53:28.188094 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.188273 kubelet[2651]: E0910 23:53:28.188260 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.188273 kubelet[2651]: W0910 23:53:28.188271 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.188394 kubelet[2651]: E0910 23:53:28.188304 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.188436 kubelet[2651]: E0910 23:53:28.188422 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.188436 kubelet[2651]: W0910 23:53:28.188433 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.188587 kubelet[2651]: E0910 23:53:28.188487 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.188654 kubelet[2651]: E0910 23:53:28.188595 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.188654 kubelet[2651]: W0910 23:53:28.188604 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.188654 kubelet[2651]: E0910 23:53:28.188637 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.188745 kubelet[2651]: E0910 23:53:28.188732 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.188745 kubelet[2651]: W0910 23:53:28.188741 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.188797 kubelet[2651]: E0910 23:53:28.188784 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:28.188868 kubelet[2651]: E0910 23:53:28.188857 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:28.188868 kubelet[2651]: W0910 23:53:28.188866 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:28.188928 kubelet[2651]: E0910 23:53:28.188875 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Sep 10 23:53:28.189024 kubelet[2651]: E0910 23:53:28.189012 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.189024 kubelet[2651]: W0910 23:53:28.189023 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.189091 kubelet[2651]: E0910 23:53:28.189031 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:28.189191 kubelet[2651]: E0910 23:53:28.189181 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.189223 kubelet[2651]: W0910 23:53:28.189191 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.189223 kubelet[2651]: E0910 23:53:28.189200 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:28.195021 containerd[1529]: time="2025-09-10T23:53:28.194987343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-snpjh,Uid:e5e8227d-37d8-4e34-989e-b343281ce518,Namespace:calico-system,Attempt:0,}" Sep 10 23:53:28.233116 containerd[1529]: time="2025-09-10T23:53:28.233023727Z" level=info msg="connecting to shim 552ee1bc66a707b642b8a9a2a68e093623f1460455e8524f2a22813f1da83c1f" address="unix:///run/containerd/s/ccc974f200d232f8f5c58046a3c1b7454806922218d76451296c9923ce4b5ec3" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:53:28.255337 systemd[1]: Started cri-containerd-552ee1bc66a707b642b8a9a2a68e093623f1460455e8524f2a22813f1da83c1f.scope - libcontainer container 552ee1bc66a707b642b8a9a2a68e093623f1460455e8524f2a22813f1da83c1f. Sep 10 23:53:28.288711 kubelet[2651]: E0910 23:53:28.288681 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.288711 kubelet[2651]: W0910 23:53:28.288705 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.288880 kubelet[2651]: E0910 23:53:28.288738 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:28.288937 kubelet[2651]: E0910 23:53:28.288923 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.288937 kubelet[2651]: W0910 23:53:28.288935 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.289119 kubelet[2651]: E0910 23:53:28.288956 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:28.289351 kubelet[2651]: E0910 23:53:28.289336 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.289427 kubelet[2651]: W0910 23:53:28.289414 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.289493 kubelet[2651]: E0910 23:53:28.289483 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:28.289718 kubelet[2651]: E0910 23:53:28.289704 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.289898 kubelet[2651]: W0910 23:53:28.289780 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.289898 kubelet[2651]: E0910 23:53:28.289802 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:28.290022 kubelet[2651]: E0910 23:53:28.290011 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.290090 kubelet[2651]: W0910 23:53:28.290078 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.290168 kubelet[2651]: E0910 23:53:28.290156 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:28.290556 kubelet[2651]: E0910 23:53:28.290539 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.290609 kubelet[2651]: W0910 23:53:28.290555 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.290609 kubelet[2651]: E0910 23:53:28.290601 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:28.290843 kubelet[2651]: E0910 23:53:28.290808 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.290843 kubelet[2651]: W0910 23:53:28.290823 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.291045 kubelet[2651]: E0910 23:53:28.290868 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:28.291045 kubelet[2651]: E0910 23:53:28.291059 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.291045 kubelet[2651]: W0910 23:53:28.291069 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.291045 kubelet[2651]: E0910 23:53:28.291104 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:28.296598 kubelet[2651]: E0910 23:53:28.296572 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.296598 kubelet[2651]: W0910 23:53:28.296594 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.297097 kubelet[2651]: E0910 23:53:28.296616 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:28.297097 kubelet[2651]: E0910 23:53:28.296882 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.297097 kubelet[2651]: W0910 23:53:28.296893 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.297097 kubelet[2651]: E0910 23:53:28.296910 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:28.297097 kubelet[2651]: E0910 23:53:28.297100 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.297355 kubelet[2651]: W0910 23:53:28.297133 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.297355 kubelet[2651]: E0910 23:53:28.297182 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:28.297687 kubelet[2651]: E0910 23:53:28.297669 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.297687 kubelet[2651]: W0910 23:53:28.297685 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.297761 kubelet[2651]: E0910 23:53:28.297712 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:28.298133 kubelet[2651]: E0910 23:53:28.297881 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.298133 kubelet[2651]: W0910 23:53:28.297894 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.298133 kubelet[2651]: E0910 23:53:28.297932 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:28.298133 kubelet[2651]: E0910 23:53:28.298068 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.298133 kubelet[2651]: W0910 23:53:28.298076 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.298133 kubelet[2651]: E0910 23:53:28.298115 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:28.298341 kubelet[2651]: E0910 23:53:28.298269 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.298341 kubelet[2651]: W0910 23:53:28.298279 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.298341 kubelet[2651]: E0910 23:53:28.298295 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:28.298541 kubelet[2651]: E0910 23:53:28.298528 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.298541 kubelet[2651]: W0910 23:53:28.298540 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.298589 kubelet[2651]: E0910 23:53:28.298554 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:28.298740 kubelet[2651]: E0910 23:53:28.298729 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.298740 kubelet[2651]: W0910 23:53:28.298740 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.298796 kubelet[2651]: E0910 23:53:28.298767 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:28.298911 kubelet[2651]: E0910 23:53:28.298898 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.298911 kubelet[2651]: W0910 23:53:28.298909 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.298956 kubelet[2651]: E0910 23:53:28.298929 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:28.299576 kubelet[2651]: E0910 23:53:28.299537 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.299576 kubelet[2651]: W0910 23:53:28.299552 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.299787 kubelet[2651]: E0910 23:53:28.299762 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:28.299844 kubelet[2651]: E0910 23:53:28.299804 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.299844 kubelet[2651]: W0910 23:53:28.299814 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.299930 kubelet[2651]: E0910 23:53:28.299888 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:28.299966 kubelet[2651]: E0910 23:53:28.299953 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.299966 kubelet[2651]: W0910 23:53:28.299963 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.300106 kubelet[2651]: E0910 23:53:28.300012 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:28.300315 kubelet[2651]: E0910 23:53:28.300290 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.300315 kubelet[2651]: W0910 23:53:28.300306 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.300367 kubelet[2651]: E0910 23:53:28.300323 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:28.300628 kubelet[2651]: E0910 23:53:28.300592 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.300658 kubelet[2651]: W0910 23:53:28.300628 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.300658 kubelet[2651]: E0910 23:53:28.300648 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:28.303422 kubelet[2651]: E0910 23:53:28.301208 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.303422 kubelet[2651]: W0910 23:53:28.301227 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.303422 kubelet[2651]: E0910 23:53:28.301248 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:28.303422 kubelet[2651]: E0910 23:53:28.302017 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.303422 kubelet[2651]: W0910 23:53:28.302030 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.303422 kubelet[2651]: E0910 23:53:28.302045 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:28.311283 kubelet[2651]: E0910 23:53:28.311252 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:28.311283 kubelet[2651]: W0910 23:53:28.311272 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:28.311283 kubelet[2651]: E0910 23:53:28.311289 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:28.332926 containerd[1529]: time="2025-09-10T23:53:28.332818074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-snpjh,Uid:e5e8227d-37d8-4e34-989e-b343281ce518,Namespace:calico-system,Attempt:0,} returns sandbox id \"552ee1bc66a707b642b8a9a2a68e093623f1460455e8524f2a22813f1da83c1f\"" Sep 10 23:53:29.001220 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2559315916.mount: Deactivated successfully. 
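The repeated driver-call.go/plugins.go errors above all have one cause: kubelet probes the FlexVolume directory nodeagent~uds, fails to find the executable /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds in $PATH, and then tries to parse the empty output as the JSON status object a driver's `init` call must print — hence "unexpected end of JSON input". A minimal, illustrative sketch of the response shape kubelet expects (the path and driver name come from this log; the stub itself is an assumption, not the real nodeagent~uds driver):

```python
import json
import sys


def handle(command: str) -> str:
    """Return the JSON a FlexVolume driver call is expected to print.

    kubelet's driver-call.go parses the driver's stdout as JSON; an empty
    reply is exactly what produces the "unexpected end of JSON input"
    errors seen in the log above.
    """
    if command == "init":
        # "status" is required; "capabilities" tells kubelet whether this
        # driver implements attach/detach (here: it does not).
        return json.dumps({"status": "Success",
                           "capabilities": {"attach": False}})
    # Illustrative fallback: report unsupported commands explicitly.
    return json.dumps({"status": "Not supported",
                       "message": f"unsupported command: {command}"})


if __name__ == "__main__":
    print(handle(sys.argv[1] if len(sys.argv) > 1 else "init"))
```

Installing any executable that answers `init` this way at the probed path would silence the probe errors; the exact capabilities a real driver advertises depend on what it implements.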
Sep 10 23:53:29.851734 containerd[1529]: time="2025-09-10T23:53:29.851690262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:53:29.852579 containerd[1529]: time="2025-09-10T23:53:29.852548106Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 10 23:53:29.853314 containerd[1529]: time="2025-09-10T23:53:29.853251550Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:53:29.855563 containerd[1529]: time="2025-09-10T23:53:29.855536523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:53:29.856457 containerd[1529]: time="2025-09-10T23:53:29.856202407Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.846242033s"
Sep 10 23:53:29.856457 containerd[1529]: time="2025-09-10T23:53:29.856241007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 10 23:53:29.860382 containerd[1529]: time="2025-09-10T23:53:29.859668426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 10 23:53:29.875286 containerd[1529]: time="2025-09-10T23:53:29.875242632Z" level=info msg="CreateContainer within sandbox \"fe75679a0575428e73f113e45116484f16004bba03c3aaa5289d285cecd91768\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 10 23:53:29.891471 containerd[1529]: time="2025-09-10T23:53:29.891415841Z" level=info msg="Container 0c9ab6835b894644f1504c0d7381d26620c263a0b9c19897792b0604d6e0702f: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:53:29.899794 containerd[1529]: time="2025-09-10T23:53:29.899756127Z" level=info msg="CreateContainer within sandbox \"fe75679a0575428e73f113e45116484f16004bba03c3aaa5289d285cecd91768\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0c9ab6835b894644f1504c0d7381d26620c263a0b9c19897792b0604d6e0702f\""
Sep 10 23:53:29.900147 containerd[1529]: time="2025-09-10T23:53:29.900104529Z" level=info msg="StartContainer for \"0c9ab6835b894644f1504c0d7381d26620c263a0b9c19897792b0604d6e0702f\""
Sep 10 23:53:29.901098 containerd[1529]: time="2025-09-10T23:53:29.901072894Z" level=info msg="connecting to shim 0c9ab6835b894644f1504c0d7381d26620c263a0b9c19897792b0604d6e0702f" address="unix:///run/containerd/s/61f1db51872d3900fd722babff4c23d1a48d871d971b32f2b4ae037bbb70d70f" protocol=ttrpc version=3
Sep 10 23:53:29.920336 systemd[1]: Started cri-containerd-0c9ab6835b894644f1504c0d7381d26620c263a0b9c19897792b0604d6e0702f.scope - libcontainer container 0c9ab6835b894644f1504c0d7381d26620c263a0b9c19897792b0604d6e0702f.
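The dns.go "Nameserver limits exceeded" records in this log come from kubelet capping a pod's resolv.conf at three nameservers and omitting the rest, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A hedged sketch of that truncation (the limit of three matches kubelet's documented behavior; the function name is illustrative, and the fourth server in the example is hypothetical since the log does not show which entry was dropped):

```python
MAX_DNS_NAMESERVERS = 3  # kubelet's documented per-pod nameserver cap


def applied_nameserver_line(configured: list[str]) -> str:
    """Keep the first three nameservers, as kubelet does when building a
    pod's resolv.conf; extra entries are omitted with a logged warning."""
    return " ".join(configured[:MAX_DNS_NAMESERVERS])
```

With a hypothetical fourth entry such as 9.9.9.9 configured on the host, `applied_nameserver_line(["1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"])` yields the same applied line the log reports.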
Sep 10 23:53:29.945453 kubelet[2651]: E0910 23:53:29.945389 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xvkpt" podUID="62bd140d-cac0-41ac-b0b4-d6e71703a322"
Sep 10 23:53:29.981754 containerd[1529]: time="2025-09-10T23:53:29.981717899Z" level=info msg="StartContainer for \"0c9ab6835b894644f1504c0d7381d26620c263a0b9c19897792b0604d6e0702f\" returns successfully"
Sep 10 23:53:30.002685 kubelet[2651]: E0910 23:53:30.002658 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:53:30.089149 kubelet[2651]: E0910 23:53:30.089108 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:30.089149 kubelet[2651]: W0910 23:53:30.089131 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:30.089317 kubelet[2651]: E0910 23:53:30.089164 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:30.094841 kubelet[2651]: E0910 23:53:30.094806 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:30.094841 kubelet[2651]: W0910 23:53:30.094824 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:30.094841 kubelet[2651]: E0910 23:53:30.094836 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:30.095362 kubelet[2651]: E0910 23:53:30.095343 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.095362 kubelet[2651]: W0910 23:53:30.095357 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.095587 kubelet[2651]: E0910 23:53:30.095552 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:30.096032 kubelet[2651]: E0910 23:53:30.095929 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.096095 kubelet[2651]: W0910 23:53:30.095947 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.096095 kubelet[2651]: E0910 23:53:30.096091 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:30.110364 kubelet[2651]: E0910 23:53:30.110230 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.110364 kubelet[2651]: W0910 23:53:30.110254 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.110364 kubelet[2651]: E0910 23:53:30.110272 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:30.111337 kubelet[2651]: E0910 23:53:30.110964 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.111337 kubelet[2651]: W0910 23:53:30.110983 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.111337 kubelet[2651]: E0910 23:53:30.111001 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:30.111719 kubelet[2651]: E0910 23:53:30.111650 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.111719 kubelet[2651]: W0910 23:53:30.111670 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.111719 kubelet[2651]: E0910 23:53:30.111688 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:30.113384 kubelet[2651]: E0910 23:53:30.113352 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.113384 kubelet[2651]: W0910 23:53:30.113375 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.113496 kubelet[2651]: E0910 23:53:30.113427 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:30.113543 kubelet[2651]: E0910 23:53:30.113523 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.113543 kubelet[2651]: W0910 23:53:30.113538 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.113662 kubelet[2651]: E0910 23:53:30.113568 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:30.113760 kubelet[2651]: E0910 23:53:30.113741 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.113760 kubelet[2651]: W0910 23:53:30.113757 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.113821 kubelet[2651]: E0910 23:53:30.113790 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:30.113958 kubelet[2651]: E0910 23:53:30.113940 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.113958 kubelet[2651]: W0910 23:53:30.113955 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.114058 kubelet[2651]: E0910 23:53:30.113970 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:30.114216 kubelet[2651]: E0910 23:53:30.114200 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.114216 kubelet[2651]: W0910 23:53:30.114214 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.114286 kubelet[2651]: E0910 23:53:30.114224 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:30.114448 kubelet[2651]: E0910 23:53:30.114428 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.114448 kubelet[2651]: W0910 23:53:30.114444 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.114517 kubelet[2651]: E0910 23:53:30.114457 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:30.115174 kubelet[2651]: E0910 23:53:30.115037 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.115174 kubelet[2651]: W0910 23:53:30.115065 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.115174 kubelet[2651]: E0910 23:53:30.115085 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:30.115377 kubelet[2651]: E0910 23:53:30.115363 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.115438 kubelet[2651]: W0910 23:53:30.115428 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.115544 kubelet[2651]: E0910 23:53:30.115517 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:30.115819 kubelet[2651]: E0910 23:53:30.115802 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.115884 kubelet[2651]: W0910 23:53:30.115873 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.116000 kubelet[2651]: E0910 23:53:30.115973 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:30.117290 kubelet[2651]: E0910 23:53:30.117271 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.117367 kubelet[2651]: W0910 23:53:30.117353 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.117479 kubelet[2651]: E0910 23:53:30.117455 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:30.117810 kubelet[2651]: E0910 23:53:30.117717 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.117810 kubelet[2651]: W0910 23:53:30.117731 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.117810 kubelet[2651]: E0910 23:53:30.117765 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:30.118553 kubelet[2651]: E0910 23:53:30.118165 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.118703 kubelet[2651]: W0910 23:53:30.118646 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.118703 kubelet[2651]: E0910 23:53:30.118673 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:30.118886 kubelet[2651]: E0910 23:53:30.118861 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.118886 kubelet[2651]: W0910 23:53:30.118877 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.118950 kubelet[2651]: E0910 23:53:30.118893 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:30.119083 kubelet[2651]: E0910 23:53:30.119063 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.119083 kubelet[2651]: W0910 23:53:30.119076 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.119155 kubelet[2651]: E0910 23:53:30.119087 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:30.119380 kubelet[2651]: E0910 23:53:30.119359 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:30.119380 kubelet[2651]: W0910 23:53:30.119371 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:30.119380 kubelet[2651]: E0910 23:53:30.119381 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:31.004086 kubelet[2651]: I0910 23:53:31.004047 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:53:31.004912 kubelet[2651]: E0910 23:53:31.004790 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:31.075509 containerd[1529]: time="2025-09-10T23:53:31.075448660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:31.076121 containerd[1529]: time="2025-09-10T23:53:31.076092543Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 10 23:53:31.076765 containerd[1529]: time="2025-09-10T23:53:31.076738306Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:31.078485 containerd[1529]: time="2025-09-10T23:53:31.078454234Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:31.079228 containerd[1529]: time="2025-09-10T23:53:31.078948037Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.21712892s" Sep 10 23:53:31.079228 containerd[1529]: time="2025-09-10T23:53:31.078980117Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 10 23:53:31.081014 containerd[1529]: time="2025-09-10T23:53:31.080988087Z" level=info msg="CreateContainer within sandbox \"552ee1bc66a707b642b8a9a2a68e093623f1460455e8524f2a22813f1da83c1f\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 23:53:31.090162 containerd[1529]: time="2025-09-10T23:53:31.089980130Z" level=info msg="Container 75c5a101f82641a1338102991ef6f377d930aa1fdfbf979f7b449e88b6ab661f: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:53:31.102368 kubelet[2651]: E0910 23:53:31.102344 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.102590 kubelet[2651]: W0910 23:53:31.102515 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:31.102590 kubelet[2651]: E0910 23:53:31.102544 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:31.102832 kubelet[2651]: E0910 23:53:31.102819 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.102901 kubelet[2651]: W0910 23:53:31.102889 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:31.102963 kubelet[2651]: E0910 23:53:31.102952 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:31.103255 kubelet[2651]: E0910 23:53:31.103194 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.103255 kubelet[2651]: W0910 23:53:31.103207 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:31.103255 kubelet[2651]: E0910 23:53:31.103217 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:31.103580 kubelet[2651]: E0910 23:53:31.103510 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.103580 kubelet[2651]: W0910 23:53:31.103523 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:31.103580 kubelet[2651]: E0910 23:53:31.103534 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:31.103825 kubelet[2651]: E0910 23:53:31.103812 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.103889 kubelet[2651]: W0910 23:53:31.103879 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:31.103943 kubelet[2651]: E0910 23:53:31.103933 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:31.104215 kubelet[2651]: E0910 23:53:31.104157 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.104215 kubelet[2651]: W0910 23:53:31.104170 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:31.104215 kubelet[2651]: E0910 23:53:31.104180 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:31.104529 kubelet[2651]: E0910 23:53:31.104467 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.104529 kubelet[2651]: W0910 23:53:31.104480 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:31.104529 kubelet[2651]: E0910 23:53:31.104490 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:31.104821 kubelet[2651]: E0910 23:53:31.104750 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.104821 kubelet[2651]: W0910 23:53:31.104762 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:31.104821 kubelet[2651]: E0910 23:53:31.104773 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:31.105123 kubelet[2651]: E0910 23:53:31.105063 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.105123 kubelet[2651]: W0910 23:53:31.105077 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:31.105123 kubelet[2651]: E0910 23:53:31.105088 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:31.105436 kubelet[2651]: E0910 23:53:31.105422 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.105436 kubelet[2651]: W0910 23:53:31.105461 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:31.105436 kubelet[2651]: E0910 23:53:31.105475 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:31.105719 kubelet[2651]: E0910 23:53:31.105709 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.105775 kubelet[2651]: W0910 23:53:31.105765 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:31.105828 kubelet[2651]: E0910 23:53:31.105818 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:31.106107 kubelet[2651]: E0910 23:53:31.106030 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.106107 kubelet[2651]: W0910 23:53:31.106042 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:31.106107 kubelet[2651]: E0910 23:53:31.106061 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:31.106477 kubelet[2651]: E0910 23:53:31.106374 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.106477 kubelet[2651]: W0910 23:53:31.106387 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:31.106477 kubelet[2651]: E0910 23:53:31.106397 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:31.106688 kubelet[2651]: E0910 23:53:31.106624 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.106688 kubelet[2651]: W0910 23:53:31.106634 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:31.106688 kubelet[2651]: E0910 23:53:31.106644 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:53:31.106999 kubelet[2651]: E0910 23:53:31.106923 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.106999 kubelet[2651]: W0910 23:53:31.106935 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:53:31.106999 kubelet[2651]: E0910 23:53:31.106946 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:53:31.107353 containerd[1529]: time="2025-09-10T23:53:31.107305894Z" level=info msg="CreateContainer within sandbox \"552ee1bc66a707b642b8a9a2a68e093623f1460455e8524f2a22813f1da83c1f\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"75c5a101f82641a1338102991ef6f377d930aa1fdfbf979f7b449e88b6ab661f\"" Sep 10 23:53:31.108516 containerd[1529]: time="2025-09-10T23:53:31.108491900Z" level=info msg="StartContainer for \"75c5a101f82641a1338102991ef6f377d930aa1fdfbf979f7b449e88b6ab661f\"" Sep 10 23:53:31.110089 containerd[1529]: time="2025-09-10T23:53:31.110016707Z" level=info msg="connecting to shim 75c5a101f82641a1338102991ef6f377d930aa1fdfbf979f7b449e88b6ab661f" address="unix:///run/containerd/s/ccc974f200d232f8f5c58046a3c1b7454806922218d76451296c9923ce4b5ec3" protocol=ttrpc version=3 Sep 10 23:53:31.119017 kubelet[2651]: E0910 23:53:31.118997 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:53:31.119017 kubelet[2651]: W0910 23:53:31.119015 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 
23:53:31.119221 kubelet[2651]: E0910 23:53:31.119028 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.119517 kubelet[2651]: E0910 23:53:31.119503 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.119517 kubelet[2651]: W0910 23:53:31.119516 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.119578 kubelet[2651]: E0910 23:53:31.119533 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.119912 kubelet[2651]: E0910 23:53:31.119896 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.119912 kubelet[2651]: W0910 23:53:31.119910 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.119971 kubelet[2651]: E0910 23:53:31.119926 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.120562 kubelet[2651]: E0910 23:53:31.120544 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.120562 kubelet[2651]: W0910 23:53:31.120561 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.120635 kubelet[2651]: E0910 23:53:31.120582 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.121186 kubelet[2651]: E0910 23:53:31.121168 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.121186 kubelet[2651]: W0910 23:53:31.121184 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.121269 kubelet[2651]: E0910 23:53:31.121242 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.122335 kubelet[2651]: E0910 23:53:31.122316 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.122335 kubelet[2651]: W0910 23:53:31.122332 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.122537 kubelet[2651]: E0910 23:53:31.122364 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.122537 kubelet[2651]: E0910 23:53:31.122494 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.122537 kubelet[2651]: W0910 23:53:31.122503 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.122605 kubelet[2651]: E0910 23:53:31.122542 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.122661 kubelet[2651]: E0910 23:53:31.122646 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.122661 kubelet[2651]: W0910 23:53:31.122657 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.122782 kubelet[2651]: E0910 23:53:31.122691 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.122805 kubelet[2651]: E0910 23:53:31.122797 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.122825 kubelet[2651]: W0910 23:53:31.122806 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.122825 kubelet[2651]: E0910 23:53:31.122818 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.122969 kubelet[2651]: E0910 23:53:31.122952 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.122969 kubelet[2651]: W0910 23:53:31.122963 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.123031 kubelet[2651]: E0910 23:53:31.122977 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.123230 kubelet[2651]: E0910 23:53:31.123211 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.123275 kubelet[2651]: W0910 23:53:31.123230 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.123275 kubelet[2651]: E0910 23:53:31.123251 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.123552 kubelet[2651]: E0910 23:53:31.123533 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.123624 kubelet[2651]: W0910 23:53:31.123610 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.123750 kubelet[2651]: E0910 23:53:31.123683 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.123990 kubelet[2651]: E0910 23:53:31.123973 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.124068 kubelet[2651]: W0910 23:53:31.124045 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.124276 kubelet[2651]: E0910 23:53:31.124169 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.124406 kubelet[2651]: E0910 23:53:31.124390 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.124466 kubelet[2651]: W0910 23:53:31.124455 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.124643 kubelet[2651]: E0910 23:53:31.124563 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.124768 kubelet[2651]: E0910 23:53:31.124752 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.124827 kubelet[2651]: W0910 23:53:31.124816 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.124981 kubelet[2651]: E0910 23:53:31.124938 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.125113 kubelet[2651]: E0910 23:53:31.125093 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.125203 kubelet[2651]: W0910 23:53:31.125190 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.125351 kubelet[2651]: E0910 23:53:31.125326 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.127114 kubelet[2651]: E0910 23:53:31.127089 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.127114 kubelet[2651]: W0910 23:53:31.127114 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.128668 kubelet[2651]: E0910 23:53:31.127260 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.128668 kubelet[2651]: E0910 23:53:31.127589 2651 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 10 23:53:31.128668 kubelet[2651]: W0910 23:53:31.127607 2651 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 10 23:53:31.128668 kubelet[2651]: E0910 23:53:31.127625 2651 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 10 23:53:31.146340 systemd[1]: Started cri-containerd-75c5a101f82641a1338102991ef6f377d930aa1fdfbf979f7b449e88b6ab661f.scope - libcontainer container 75c5a101f82641a1338102991ef6f377d930aa1fdfbf979f7b449e88b6ab661f.
Sep 10 23:53:31.190268 containerd[1529]: time="2025-09-10T23:53:31.190165176Z" level=info msg="StartContainer for \"75c5a101f82641a1338102991ef6f377d930aa1fdfbf979f7b449e88b6ab661f\" returns successfully"
Sep 10 23:53:31.193934 systemd[1]: cri-containerd-75c5a101f82641a1338102991ef6f377d930aa1fdfbf979f7b449e88b6ab661f.scope: Deactivated successfully.
Sep 10 23:53:31.214518 containerd[1529]: time="2025-09-10T23:53:31.214397174Z" level=info msg="received exit event container_id:\"75c5a101f82641a1338102991ef6f377d930aa1fdfbf979f7b449e88b6ab661f\" id:\"75c5a101f82641a1338102991ef6f377d930aa1fdfbf979f7b449e88b6ab661f\" pid:3386 exited_at:{seconds:1757548411 nanos:195988284}"
Sep 10 23:53:31.221850 containerd[1529]: time="2025-09-10T23:53:31.221809930Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75c5a101f82641a1338102991ef6f377d930aa1fdfbf979f7b449e88b6ab661f\" id:\"75c5a101f82641a1338102991ef6f377d930aa1fdfbf979f7b449e88b6ab661f\" pid:3386 exited_at:{seconds:1757548411 nanos:195988284}"
Sep 10 23:53:31.253168 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75c5a101f82641a1338102991ef6f377d930aa1fdfbf979f7b449e88b6ab661f-rootfs.mount: Deactivated successfully.
Sep 10 23:53:31.941639 kubelet[2651]: E0910 23:53:31.941588 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xvkpt" podUID="62bd140d-cac0-41ac-b0b4-d6e71703a322"
Sep 10 23:53:32.012999 containerd[1529]: time="2025-09-10T23:53:32.011895438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 10 23:53:32.040292 kubelet[2651]: I0910 23:53:32.040232 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-574968fcbc-79t4d" podStartSLOduration=3.190585155 podStartE2EDuration="5.040216967s" podCreationTimestamp="2025-09-10 23:53:27 +0000 UTC" firstStartedPulling="2025-09-10 23:53:28.00944001 +0000 UTC m=+20.151417252" lastFinishedPulling="2025-09-10 23:53:29.859071822 +0000 UTC m=+22.001049064" observedRunningTime="2025-09-10 23:53:30.016982128 +0000 UTC m=+22.158959370" watchObservedRunningTime="2025-09-10 23:53:32.040216967 +0000 UTC m=+24.182194209"
Sep 10 23:53:33.941507 kubelet[2651]: E0910 23:53:33.941451 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-xvkpt" podUID="62bd140d-cac0-41ac-b0b4-d6e71703a322"
Sep 10 23:53:34.664134 containerd[1529]: time="2025-09-10T23:53:34.664068728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:53:34.664557 containerd[1529]: time="2025-09-10T23:53:34.664500930Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477"
Sep 10 23:53:34.665241 containerd[1529]: time="2025-09-10T23:53:34.665210293Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:53:34.667418 containerd[1529]: time="2025-09-10T23:53:34.667378701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:53:34.667968 containerd[1529]: time="2025-09-10T23:53:34.667929943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.655991025s"
Sep 10 23:53:34.667968 containerd[1529]: time="2025-09-10T23:53:34.667962184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 10 23:53:34.670326 containerd[1529]: time="2025-09-10T23:53:34.670283993Z" level=info msg="CreateContainer within sandbox \"552ee1bc66a707b642b8a9a2a68e093623f1460455e8524f2a22813f1da83c1f\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 10 23:53:34.677524 containerd[1529]: time="2025-09-10T23:53:34.676990180Z" level=info msg="Container 2b25f17fa0cf0cc6658904cde8f389294b422c6983ba48a92de3fa2fbe2e49de: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:53:34.685168 containerd[1529]: time="2025-09-10T23:53:34.685106732Z" level=info msg="CreateContainer within sandbox \"552ee1bc66a707b642b8a9a2a68e093623f1460455e8524f2a22813f1da83c1f\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2b25f17fa0cf0cc6658904cde8f389294b422c6983ba48a92de3fa2fbe2e49de\""
Sep 10 23:53:34.686158 containerd[1529]: time="2025-09-10T23:53:34.685858935Z" level=info msg="StartContainer for \"2b25f17fa0cf0cc6658904cde8f389294b422c6983ba48a92de3fa2fbe2e49de\""
Sep 10 23:53:34.688163 containerd[1529]: time="2025-09-10T23:53:34.687854943Z" level=info msg="connecting to shim 2b25f17fa0cf0cc6658904cde8f389294b422c6983ba48a92de3fa2fbe2e49de" address="unix:///run/containerd/s/ccc974f200d232f8f5c58046a3c1b7454806922218d76451296c9923ce4b5ec3" protocol=ttrpc version=3
Sep 10 23:53:34.709282 systemd[1]: Started cri-containerd-2b25f17fa0cf0cc6658904cde8f389294b422c6983ba48a92de3fa2fbe2e49de.scope - libcontainer container 2b25f17fa0cf0cc6658904cde8f389294b422c6983ba48a92de3fa2fbe2e49de.
Sep 10 23:53:34.744929 containerd[1529]: time="2025-09-10T23:53:34.744828531Z" level=info msg="StartContainer for \"2b25f17fa0cf0cc6658904cde8f389294b422c6983ba48a92de3fa2fbe2e49de\" returns successfully"
Sep 10 23:53:35.309760 systemd[1]: cri-containerd-2b25f17fa0cf0cc6658904cde8f389294b422c6983ba48a92de3fa2fbe2e49de.scope: Deactivated successfully.
Sep 10 23:53:35.310223 systemd[1]: cri-containerd-2b25f17fa0cf0cc6658904cde8f389294b422c6983ba48a92de3fa2fbe2e49de.scope: Consumed 487ms CPU time, 178.5M memory peak, 2.3M read from disk, 165.8M written to disk.
Sep 10 23:53:35.318535 containerd[1529]: time="2025-09-10T23:53:35.318470624Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2b25f17fa0cf0cc6658904cde8f389294b422c6983ba48a92de3fa2fbe2e49de\" id:\"2b25f17fa0cf0cc6658904cde8f389294b422c6983ba48a92de3fa2fbe2e49de\" pid:3446 exited_at:{seconds:1757548415 nanos:318130622}"
Sep 10 23:53:35.325861 containerd[1529]: time="2025-09-10T23:53:35.325702171Z" level=info msg="received exit event container_id:\"2b25f17fa0cf0cc6658904cde8f389294b422c6983ba48a92de3fa2fbe2e49de\" id:\"2b25f17fa0cf0cc6658904cde8f389294b422c6983ba48a92de3fa2fbe2e49de\" pid:3446 exited_at:{seconds:1757548415 nanos:318130622}"
Sep 10 23:53:35.345084 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b25f17fa0cf0cc6658904cde8f389294b422c6983ba48a92de3fa2fbe2e49de-rootfs.mount: Deactivated successfully.
Sep 10 23:53:35.350008 kubelet[2651]: I0910 23:53:35.349963 2651 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 10 23:53:35.429471 systemd[1]: Created slice kubepods-burstable-podd4330244_faa7_4151_a455_862c6ed1a966.slice - libcontainer container kubepods-burstable-podd4330244_faa7_4151_a455_862c6ed1a966.slice.
Sep 10 23:53:35.452253 systemd[1]: Created slice kubepods-besteffort-podba6e97a8_80cf_4e25_a0fc_9481e05eee18.slice - libcontainer container kubepods-besteffort-podba6e97a8_80cf_4e25_a0fc_9481e05eee18.slice.
Sep 10 23:53:35.507602 systemd[1]: Created slice kubepods-burstable-poda659cd15_573d_41fb_8f1f_438acd6270eb.slice - libcontainer container kubepods-burstable-poda659cd15_573d_41fb_8f1f_438acd6270eb.slice.
Sep 10 23:53:35.515213 systemd[1]: Created slice kubepods-besteffort-pod0119b3bf_690b_439e_8d9e_19ed6e7482fa.slice - libcontainer container kubepods-besteffort-pod0119b3bf_690b_439e_8d9e_19ed6e7482fa.slice.
Sep 10 23:53:35.523971 systemd[1]: Created slice kubepods-besteffort-podb3ce6a54_d271_47e5_8b07_284987db4893.slice - libcontainer container kubepods-besteffort-podb3ce6a54_d271_47e5_8b07_284987db4893.slice.
Sep 10 23:53:35.531304 kubelet[2651]: I0910 23:53:35.531207 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d4330244-faa7-4151-a455-862c6ed1a966-config-volume\") pod \"coredns-668d6bf9bc-89tdh\" (UID: \"d4330244-faa7-4151-a455-862c6ed1a966\") " pod="kube-system/coredns-668d6bf9bc-89tdh"
Sep 10 23:53:35.531304 kubelet[2651]: I0910 23:53:35.531254 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrdtd\" (UniqueName: \"kubernetes.io/projected/ba6e97a8-80cf-4e25-a0fc-9481e05eee18-kube-api-access-lrdtd\") pod \"calico-kube-controllers-777d545795-gnhhs\" (UID: \"ba6e97a8-80cf-4e25-a0fc-9481e05eee18\") " pod="calico-system/calico-kube-controllers-777d545795-gnhhs"
Sep 10 23:53:35.531304 kubelet[2651]: I0910 23:53:35.531276 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tvnf\" (UniqueName: \"kubernetes.io/projected/d4330244-faa7-4151-a455-862c6ed1a966-kube-api-access-9tvnf\") pod \"coredns-668d6bf9bc-89tdh\" (UID: \"d4330244-faa7-4151-a455-862c6ed1a966\") " pod="kube-system/coredns-668d6bf9bc-89tdh"
Sep 10 23:53:35.531304 kubelet[2651]: I0910 23:53:35.531293 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a659cd15-573d-41fb-8f1f-438acd6270eb-config-volume\") pod \"coredns-668d6bf9bc-gcdfs\" (UID: \"a659cd15-573d-41fb-8f1f-438acd6270eb\") " pod="kube-system/coredns-668d6bf9bc-gcdfs"
Sep 10 23:53:35.531304 kubelet[2651]: I0910 23:53:35.531308 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q6p7\" (UniqueName: \"kubernetes.io/projected/a659cd15-573d-41fb-8f1f-438acd6270eb-kube-api-access-6q6p7\") pod \"coredns-668d6bf9bc-gcdfs\" (UID: \"a659cd15-573d-41fb-8f1f-438acd6270eb\") " pod="kube-system/coredns-668d6bf9bc-gcdfs"
Sep 10 23:53:35.531573 kubelet[2651]: I0910 23:53:35.531330 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba6e97a8-80cf-4e25-a0fc-9481e05eee18-tigera-ca-bundle\") pod \"calico-kube-controllers-777d545795-gnhhs\" (UID: \"ba6e97a8-80cf-4e25-a0fc-9481e05eee18\") " pod="calico-system/calico-kube-controllers-777d545795-gnhhs"
Sep 10 23:53:35.535199 systemd[1]: Created slice kubepods-besteffort-pod056ea6d9_2a10_4480_8eca_64f1c74b82cc.slice - libcontainer container kubepods-besteffort-pod056ea6d9_2a10_4480_8eca_64f1c74b82cc.slice.
Sep 10 23:53:35.542376 systemd[1]: Created slice kubepods-besteffort-podddb49d47_44fb_41a9_b130_dc83abf59baa.slice - libcontainer container kubepods-besteffort-podddb49d47_44fb_41a9_b130_dc83abf59baa.slice.
Sep 10 23:53:35.632640 kubelet[2651]: I0910 23:53:35.632589 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0119b3bf-690b-439e-8d9e-19ed6e7482fa-whisker-backend-key-pair\") pod \"whisker-7596c9d7f7-92n65\" (UID: \"0119b3bf-690b-439e-8d9e-19ed6e7482fa\") " pod="calico-system/whisker-7596c9d7f7-92n65"
Sep 10 23:53:35.632774 kubelet[2651]: I0910 23:53:35.632673 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/056ea6d9-2a10-4480-8eca-64f1c74b82cc-calico-apiserver-certs\") pod \"calico-apiserver-78b859ff9c-ks5wz\" (UID: \"056ea6d9-2a10-4480-8eca-64f1c74b82cc\") " pod="calico-apiserver/calico-apiserver-78b859ff9c-ks5wz"
Sep 10 23:53:35.632774 kubelet[2651]: I0910 23:53:35.632695 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hljv\" (UniqueName: \"kubernetes.io/projected/0119b3bf-690b-439e-8d9e-19ed6e7482fa-kube-api-access-8hljv\") pod \"whisker-7596c9d7f7-92n65\" (UID: \"0119b3bf-690b-439e-8d9e-19ed6e7482fa\") " pod="calico-system/whisker-7596c9d7f7-92n65"
Sep 10 23:53:35.632774 kubelet[2651]: I0910 23:53:35.632713 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlx8\" (UniqueName: \"kubernetes.io/projected/056ea6d9-2a10-4480-8eca-64f1c74b82cc-kube-api-access-7rlx8\") pod \"calico-apiserver-78b859ff9c-ks5wz\" (UID: \"056ea6d9-2a10-4480-8eca-64f1c74b82cc\") " pod="calico-apiserver/calico-apiserver-78b859ff9c-ks5wz"
Sep 10 23:53:35.632774 kubelet[2651]: I0910 23:53:35.632744 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ddb49d47-44fb-41a9-b130-dc83abf59baa-config\") pod \"goldmane-54d579b49d-wk7td\" (UID: \"ddb49d47-44fb-41a9-b130-dc83abf59baa\") " pod="calico-system/goldmane-54d579b49d-wk7td"
Sep 10 23:53:35.632774 kubelet[2651]: I0910 23:53:35.632765 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ddb49d47-44fb-41a9-b130-dc83abf59baa-goldmane-key-pair\") pod \"goldmane-54d579b49d-wk7td\" (UID: \"ddb49d47-44fb-41a9-b130-dc83abf59baa\") " pod="calico-system/goldmane-54d579b49d-wk7td"
Sep 10 23:53:35.632896 kubelet[2651]: I0910 23:53:35.632784 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b3ce6a54-d271-47e5-8b07-284987db4893-calico-apiserver-certs\") pod \"calico-apiserver-78b859ff9c-r7vkr\" (UID: \"b3ce6a54-d271-47e5-8b07-284987db4893\") " pod="calico-apiserver/calico-apiserver-78b859ff9c-r7vkr"
Sep 10 23:53:35.632896 kubelet[2651]: I0910 23:53:35.632803 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xqgb\" (UniqueName: \"kubernetes.io/projected/b3ce6a54-d271-47e5-8b07-284987db4893-kube-api-access-9xqgb\") pod \"calico-apiserver-78b859ff9c-r7vkr\" (UID: \"b3ce6a54-d271-47e5-8b07-284987db4893\") " pod="calico-apiserver/calico-apiserver-78b859ff9c-r7vkr"
Sep 10 23:53:35.632896 kubelet[2651]: I0910 23:53:35.632821 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddb49d47-44fb-41a9-b130-dc83abf59baa-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-wk7td\" (UID: \"ddb49d47-44fb-41a9-b130-dc83abf59baa\") " pod="calico-system/goldmane-54d579b49d-wk7td"
Sep 10 23:53:35.632896 kubelet[2651]: I0910 23:53:35.632836 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qhjs\" (UniqueName: \"kubernetes.io/projected/ddb49d47-44fb-41a9-b130-dc83abf59baa-kube-api-access-9qhjs\") pod \"goldmane-54d579b49d-wk7td\" (UID: \"ddb49d47-44fb-41a9-b130-dc83abf59baa\") " pod="calico-system/goldmane-54d579b49d-wk7td"
Sep 10 23:53:35.632896 kubelet[2651]: I0910 23:53:35.632864 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0119b3bf-690b-439e-8d9e-19ed6e7482fa-whisker-ca-bundle\") pod \"whisker-7596c9d7f7-92n65\" (UID: \"0119b3bf-690b-439e-8d9e-19ed6e7482fa\") " pod="calico-system/whisker-7596c9d7f7-92n65"
Sep 10 23:53:35.733507 kubelet[2651]: E0910 23:53:35.733466 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:53:35.734391 containerd[1529]: time="2025-09-10T23:53:35.734336342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89tdh,Uid:d4330244-faa7-4151-a455-862c6ed1a966,Namespace:kube-system,Attempt:0,}"
Sep 10 23:53:35.781882 containerd[1529]: time="2025-09-10T23:53:35.781772320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-777d545795-gnhhs,Uid:ba6e97a8-80cf-4e25-a0fc-9481e05eee18,Namespace:calico-system,Attempt:0,}"
Sep 10 23:53:35.813229 kubelet[2651]: E0910 23:53:35.813192 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 10 23:53:35.814232 containerd[1529]: time="2025-09-10T23:53:35.813641959Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gcdfs,Uid:a659cd15-573d-41fb-8f1f-438acd6270eb,Namespace:kube-system,Attempt:0,}"
Sep 10 23:53:35.822462 containerd[1529]: time="2025-09-10T23:53:35.822420192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7596c9d7f7-92n65,Uid:0119b3bf-690b-439e-8d9e-19ed6e7482fa,Namespace:calico-system,Attempt:0,}"
Sep 10 23:53:35.830293 containerd[1529]: time="2025-09-10T23:53:35.830214421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b859ff9c-r7vkr,Uid:b3ce6a54-d271-47e5-8b07-284987db4893,Namespace:calico-apiserver,Attempt:0,}"
Sep 10 23:53:35.840729 containerd[1529]: time="2025-09-10T23:53:35.840683180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b859ff9c-ks5wz,Uid:056ea6d9-2a10-4480-8eca-64f1c74b82cc,Namespace:calico-apiserver,Attempt:0,}"
Sep 10 23:53:35.841326 containerd[1529]: time="2025-09-10T23:53:35.840940421Z" level=error msg="Failed to destroy network for sandbox \"55cab95097ee6fff33fc4299265b6b0e2c13301b3b8e43921412e4d80d260c70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:53:35.843792 containerd[1529]: time="2025-09-10T23:53:35.843750512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-777d545795-gnhhs,Uid:ba6e97a8-80cf-4e25-a0fc-9481e05eee18,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55cab95097ee6fff33fc4299265b6b0e2c13301b3b8e43921412e4d80d260c70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:53:35.846795 kubelet[2651]: E0910 23:53:35.846748 2651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55cab95097ee6fff33fc4299265b6b0e2c13301b3b8e43921412e4d80d260c70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:53:35.847940 containerd[1529]: time="2025-09-10T23:53:35.847879287Z" level=error msg="Failed to destroy network for sandbox \"2213050980124e3ac28391cccc0cd441decd448e9d41198b400da29902d50b89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:53:35.849345 containerd[1529]: time="2025-09-10T23:53:35.849248852Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89tdh,Uid:d4330244-faa7-4151-a455-862c6ed1a966,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2213050980124e3ac28391cccc0cd441decd448e9d41198b400da29902d50b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:53:35.849345 containerd[1529]: time="2025-09-10T23:53:35.849492133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-wk7td,Uid:ddb49d47-44fb-41a9-b130-dc83abf59baa,Namespace:calico-system,Attempt:0,}"
Sep 10 23:53:35.849971 kubelet[2651]: E0910 23:53:35.849329 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55cab95097ee6fff33fc4299265b6b0e2c13301b3b8e43921412e4d80d260c70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-777d545795-gnhhs"
Sep 10 23:53:35.849971 kubelet[2651]: E0910 23:53:35.849375 2651 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55cab95097ee6fff33fc4299265b6b0e2c13301b3b8e43921412e4d80d260c70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-777d545795-gnhhs"
Sep 10 23:53:35.849971 kubelet[2651]: E0910 23:53:35.849430 2651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2213050980124e3ac28391cccc0cd441decd448e9d41198b400da29902d50b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:53:35.849971 kubelet[2651]: E0910 23:53:35.849472 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2213050980124e3ac28391cccc0cd441decd448e9d41198b400da29902d50b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-89tdh"
Sep 10 23:53:35.850095 kubelet[2651]: E0910 23:53:35.849489 2651 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2213050980124e3ac28391cccc0cd441decd448e9d41198b400da29902d50b89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-89tdh"
Sep 10 23:53:35.850095 kubelet[2651]: E0910 23:53:35.849520 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-89tdh_kube-system(d4330244-faa7-4151-a455-862c6ed1a966)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-89tdh_kube-system(d4330244-faa7-4151-a455-862c6ed1a966)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2213050980124e3ac28391cccc0cd441decd448e9d41198b400da29902d50b89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-89tdh" podUID="d4330244-faa7-4151-a455-862c6ed1a966"
Sep 10 23:53:35.850095 kubelet[2651]: E0910 23:53:35.849430 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-777d545795-gnhhs_calico-system(ba6e97a8-80cf-4e25-a0fc-9481e05eee18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-777d545795-gnhhs_calico-system(ba6e97a8-80cf-4e25-a0fc-9481e05eee18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55cab95097ee6fff33fc4299265b6b0e2c13301b3b8e43921412e4d80d260c70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-777d545795-gnhhs" podUID="ba6e97a8-80cf-4e25-a0fc-9481e05eee18"
Sep 10 23:53:35.902721 containerd[1529]: time="2025-09-10T23:53:35.902426612Z" level=error msg="Failed to destroy network for sandbox \"4d1a1ce1c89203189bdb1b3572fc41eba5f57023787ce0277616d2de638b0892\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:53:35.903741 containerd[1529]: time="2025-09-10T23:53:35.903671816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7596c9d7f7-92n65,Uid:0119b3bf-690b-439e-8d9e-19ed6e7482fa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d1a1ce1c89203189bdb1b3572fc41eba5f57023787ce0277616d2de638b0892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:53:35.904125 kubelet[2651]: E0910 23:53:35.904006 2651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d1a1ce1c89203189bdb1b3572fc41eba5f57023787ce0277616d2de638b0892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 10 23:53:35.904125 kubelet[2651]: E0910 23:53:35.904090 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d1a1ce1c89203189bdb1b3572fc41eba5f57023787ce0277616d2de638b0892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7596c9d7f7-92n65"
Sep 10 23:53:35.904326 kubelet[2651]: E0910 23:53:35.904252 2651 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4d1a1ce1c89203189bdb1b3572fc41eba5f57023787ce0277616d2de638b0892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7596c9d7f7-92n65"
Sep 10 23:53:35.904779 kubelet[2651]: E0910 23:53:35.904314 2651 pod_workers.go:1301] "Error syncing pod,
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7596c9d7f7-92n65_calico-system(0119b3bf-690b-439e-8d9e-19ed6e7482fa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7596c9d7f7-92n65_calico-system(0119b3bf-690b-439e-8d9e-19ed6e7482fa)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4d1a1ce1c89203189bdb1b3572fc41eba5f57023787ce0277616d2de638b0892\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7596c9d7f7-92n65" podUID="0119b3bf-690b-439e-8d9e-19ed6e7482fa" Sep 10 23:53:35.905257 containerd[1529]: time="2025-09-10T23:53:35.905198382Z" level=error msg="Failed to destroy network for sandbox \"9eca9aa32c0fecf4fdfaf945a4d2e2fa2d4041b9a7c91ea9f997cbfb80bfbfd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:53:35.906389 containerd[1529]: time="2025-09-10T23:53:35.906243146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gcdfs,Uid:a659cd15-573d-41fb-8f1f-438acd6270eb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eca9aa32c0fecf4fdfaf945a4d2e2fa2d4041b9a7c91ea9f997cbfb80bfbfd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:53:35.906530 kubelet[2651]: E0910 23:53:35.906484 2651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eca9aa32c0fecf4fdfaf945a4d2e2fa2d4041b9a7c91ea9f997cbfb80bfbfd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:53:35.906530 kubelet[2651]: E0910 23:53:35.906523 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eca9aa32c0fecf4fdfaf945a4d2e2fa2d4041b9a7c91ea9f997cbfb80bfbfd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gcdfs" Sep 10 23:53:35.906530 kubelet[2651]: E0910 23:53:35.906540 2651 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9eca9aa32c0fecf4fdfaf945a4d2e2fa2d4041b9a7c91ea9f997cbfb80bfbfd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-gcdfs" Sep 10 23:53:35.906662 kubelet[2651]: E0910 23:53:35.906575 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-gcdfs_kube-system(a659cd15-573d-41fb-8f1f-438acd6270eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-gcdfs_kube-system(a659cd15-573d-41fb-8f1f-438acd6270eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9eca9aa32c0fecf4fdfaf945a4d2e2fa2d4041b9a7c91ea9f997cbfb80bfbfd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-gcdfs" podUID="a659cd15-573d-41fb-8f1f-438acd6270eb" Sep 10 23:53:35.913426 containerd[1529]: time="2025-09-10T23:53:35.913333412Z" level=error msg="Failed to destroy 
network for sandbox \"b8a5be66f9b8a47c7f9145de8688ce8b8fea20ea6f91a5f1f5c37f3c94cc45a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:53:35.914532 containerd[1529]: time="2025-09-10T23:53:35.914497937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b859ff9c-r7vkr,Uid:b3ce6a54-d271-47e5-8b07-284987db4893,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a5be66f9b8a47c7f9145de8688ce8b8fea20ea6f91a5f1f5c37f3c94cc45a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:53:35.914805 kubelet[2651]: E0910 23:53:35.914766 2651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a5be66f9b8a47c7f9145de8688ce8b8fea20ea6f91a5f1f5c37f3c94cc45a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:53:35.914858 kubelet[2651]: E0910 23:53:35.914828 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8a5be66f9b8a47c7f9145de8688ce8b8fea20ea6f91a5f1f5c37f3c94cc45a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78b859ff9c-r7vkr" Sep 10 23:53:35.914858 kubelet[2651]: E0910 23:53:35.914850 2651 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"b8a5be66f9b8a47c7f9145de8688ce8b8fea20ea6f91a5f1f5c37f3c94cc45a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78b859ff9c-r7vkr" Sep 10 23:53:35.914968 kubelet[2651]: E0910 23:53:35.914897 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78b859ff9c-r7vkr_calico-apiserver(b3ce6a54-d271-47e5-8b07-284987db4893)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78b859ff9c-r7vkr_calico-apiserver(b3ce6a54-d271-47e5-8b07-284987db4893)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8a5be66f9b8a47c7f9145de8688ce8b8fea20ea6f91a5f1f5c37f3c94cc45a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78b859ff9c-r7vkr" podUID="b3ce6a54-d271-47e5-8b07-284987db4893" Sep 10 23:53:35.921646 containerd[1529]: time="2025-09-10T23:53:35.921541043Z" level=error msg="Failed to destroy network for sandbox \"3b87fe5c42548f9291aa63cc15181f9b11ecf6db595f0c2df472bced7735c6aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:53:35.922617 containerd[1529]: time="2025-09-10T23:53:35.922512767Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-wk7td,Uid:ddb49d47-44fb-41a9-b130-dc83abf59baa,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b87fe5c42548f9291aa63cc15181f9b11ecf6db595f0c2df472bced7735c6aa\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:53:35.923161 kubelet[2651]: E0910 23:53:35.922954 2651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b87fe5c42548f9291aa63cc15181f9b11ecf6db595f0c2df472bced7735c6aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:53:35.923161 kubelet[2651]: E0910 23:53:35.923025 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b87fe5c42548f9291aa63cc15181f9b11ecf6db595f0c2df472bced7735c6aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-wk7td" Sep 10 23:53:35.923161 kubelet[2651]: E0910 23:53:35.923043 2651 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b87fe5c42548f9291aa63cc15181f9b11ecf6db595f0c2df472bced7735c6aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-wk7td" Sep 10 23:53:35.923290 kubelet[2651]: E0910 23:53:35.923093 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-wk7td_calico-system(ddb49d47-44fb-41a9-b130-dc83abf59baa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-wk7td_calico-system(ddb49d47-44fb-41a9-b130-dc83abf59baa)\\\": rpc error: code = Unknown desc = failed to 
setup network for sandbox \\\"3b87fe5c42548f9291aa63cc15181f9b11ecf6db595f0c2df472bced7735c6aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-wk7td" podUID="ddb49d47-44fb-41a9-b130-dc83abf59baa" Sep 10 23:53:35.926246 containerd[1529]: time="2025-09-10T23:53:35.926206221Z" level=error msg="Failed to destroy network for sandbox \"131a8dc2e70640963160262dddccfa3a19f51ed816ee0c8d0b1ae6adb82be88d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:53:35.927447 containerd[1529]: time="2025-09-10T23:53:35.927414985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b859ff9c-ks5wz,Uid:056ea6d9-2a10-4480-8eca-64f1c74b82cc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"131a8dc2e70640963160262dddccfa3a19f51ed816ee0c8d0b1ae6adb82be88d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:53:35.927629 kubelet[2651]: E0910 23:53:35.927592 2651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"131a8dc2e70640963160262dddccfa3a19f51ed816ee0c8d0b1ae6adb82be88d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:53:35.927675 kubelet[2651]: E0910 23:53:35.927645 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"131a8dc2e70640963160262dddccfa3a19f51ed816ee0c8d0b1ae6adb82be88d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78b859ff9c-ks5wz" Sep 10 23:53:35.927698 kubelet[2651]: E0910 23:53:35.927671 2651 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"131a8dc2e70640963160262dddccfa3a19f51ed816ee0c8d0b1ae6adb82be88d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-78b859ff9c-ks5wz" Sep 10 23:53:35.927760 kubelet[2651]: E0910 23:53:35.927709 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-78b859ff9c-ks5wz_calico-apiserver(056ea6d9-2a10-4480-8eca-64f1c74b82cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-78b859ff9c-ks5wz_calico-apiserver(056ea6d9-2a10-4480-8eca-64f1c74b82cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"131a8dc2e70640963160262dddccfa3a19f51ed816ee0c8d0b1ae6adb82be88d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-78b859ff9c-ks5wz" podUID="056ea6d9-2a10-4480-8eca-64f1c74b82cc" Sep 10 23:53:35.946689 systemd[1]: Created slice kubepods-besteffort-pod62bd140d_cac0_41ac_b0b4_d6e71703a322.slice - libcontainer container kubepods-besteffort-pod62bd140d_cac0_41ac_b0b4_d6e71703a322.slice. 
Sep 10 23:53:35.949391 containerd[1529]: time="2025-09-10T23:53:35.949356907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xvkpt,Uid:62bd140d-cac0-41ac-b0b4-d6e71703a322,Namespace:calico-system,Attempt:0,}" Sep 10 23:53:35.994156 containerd[1529]: time="2025-09-10T23:53:35.994094195Z" level=error msg="Failed to destroy network for sandbox \"34419968b1f07111a261a8a092932a057579c70c4a3d6db9e3e42eb6a1fab956\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:53:35.995086 containerd[1529]: time="2025-09-10T23:53:35.995028599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xvkpt,Uid:62bd140d-cac0-41ac-b0b4-d6e71703a322,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"34419968b1f07111a261a8a092932a057579c70c4a3d6db9e3e42eb6a1fab956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:53:35.995303 kubelet[2651]: E0910 23:53:35.995247 2651 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34419968b1f07111a261a8a092932a057579c70c4a3d6db9e3e42eb6a1fab956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:53:35.995387 kubelet[2651]: E0910 23:53:35.995313 2651 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34419968b1f07111a261a8a092932a057579c70c4a3d6db9e3e42eb6a1fab956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xvkpt" Sep 10 23:53:35.995387 kubelet[2651]: E0910 23:53:35.995332 2651 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"34419968b1f07111a261a8a092932a057579c70c4a3d6db9e3e42eb6a1fab956\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-xvkpt" Sep 10 23:53:35.995387 kubelet[2651]: E0910 23:53:35.995375 2651 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-xvkpt_calico-system(62bd140d-cac0-41ac-b0b4-d6e71703a322)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-xvkpt_calico-system(62bd140d-cac0-41ac-b0b4-d6e71703a322)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"34419968b1f07111a261a8a092932a057579c70c4a3d6db9e3e42eb6a1fab956\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-xvkpt" podUID="62bd140d-cac0-41ac-b0b4-d6e71703a322" Sep 10 23:53:36.022769 containerd[1529]: time="2025-09-10T23:53:36.022713457Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 23:53:36.679653 systemd[1]: run-netns-cni\x2da8b9e416\x2d2be2\x2db08c\x2d6e6c\x2d3e1cc225287b.mount: Deactivated successfully. Sep 10 23:53:39.683058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3289283895.mount: Deactivated successfully. 
Sep 10 23:53:39.929363 containerd[1529]: time="2025-09-10T23:53:39.916886204Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 10 23:53:39.929363 containerd[1529]: time="2025-09-10T23:53:39.919414091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:39.929784 containerd[1529]: time="2025-09-10T23:53:39.919882172Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.897134755s" Sep 10 23:53:39.929784 containerd[1529]: time="2025-09-10T23:53:39.929466320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 10 23:53:39.930178 containerd[1529]: time="2025-09-10T23:53:39.930119682Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:39.930641 containerd[1529]: time="2025-09-10T23:53:39.930606243Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:39.940302 containerd[1529]: time="2025-09-10T23:53:39.940207151Z" level=info msg="CreateContainer within sandbox \"552ee1bc66a707b642b8a9a2a68e093623f1460455e8524f2a22813f1da83c1f\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 23:53:39.962593 containerd[1529]: time="2025-09-10T23:53:39.962542016Z" level=info msg="Container 
ee8825b122730548f41542673069aaaef97a60d3edd9a0fd86923f4cad8beb4d: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:53:39.965273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount852686519.mount: Deactivated successfully. Sep 10 23:53:40.000476 containerd[1529]: time="2025-09-10T23:53:40.000426685Z" level=info msg="CreateContainer within sandbox \"552ee1bc66a707b642b8a9a2a68e093623f1460455e8524f2a22813f1da83c1f\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ee8825b122730548f41542673069aaaef97a60d3edd9a0fd86923f4cad8beb4d\"" Sep 10 23:53:40.001377 containerd[1529]: time="2025-09-10T23:53:40.001350088Z" level=info msg="StartContainer for \"ee8825b122730548f41542673069aaaef97a60d3edd9a0fd86923f4cad8beb4d\"" Sep 10 23:53:40.003387 containerd[1529]: time="2025-09-10T23:53:40.003135893Z" level=info msg="connecting to shim ee8825b122730548f41542673069aaaef97a60d3edd9a0fd86923f4cad8beb4d" address="unix:///run/containerd/s/ccc974f200d232f8f5c58046a3c1b7454806922218d76451296c9923ce4b5ec3" protocol=ttrpc version=3 Sep 10 23:53:40.026318 systemd[1]: Started cri-containerd-ee8825b122730548f41542673069aaaef97a60d3edd9a0fd86923f4cad8beb4d.scope - libcontainer container ee8825b122730548f41542673069aaaef97a60d3edd9a0fd86923f4cad8beb4d. Sep 10 23:53:40.073004 containerd[1529]: time="2025-09-10T23:53:40.072960962Z" level=info msg="StartContainer for \"ee8825b122730548f41542673069aaaef97a60d3edd9a0fd86923f4cad8beb4d\" returns successfully" Sep 10 23:53:40.190218 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 23:53:40.190307 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 10 23:53:40.463830 kubelet[2651]: I0910 23:53:40.463786 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hljv\" (UniqueName: \"kubernetes.io/projected/0119b3bf-690b-439e-8d9e-19ed6e7482fa-kube-api-access-8hljv\") pod \"0119b3bf-690b-439e-8d9e-19ed6e7482fa\" (UID: \"0119b3bf-690b-439e-8d9e-19ed6e7482fa\") " Sep 10 23:53:40.464234 kubelet[2651]: I0910 23:53:40.463869 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0119b3bf-690b-439e-8d9e-19ed6e7482fa-whisker-ca-bundle\") pod \"0119b3bf-690b-439e-8d9e-19ed6e7482fa\" (UID: \"0119b3bf-690b-439e-8d9e-19ed6e7482fa\") " Sep 10 23:53:40.464234 kubelet[2651]: I0910 23:53:40.463903 2651 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0119b3bf-690b-439e-8d9e-19ed6e7482fa-whisker-backend-key-pair\") pod \"0119b3bf-690b-439e-8d9e-19ed6e7482fa\" (UID: \"0119b3bf-690b-439e-8d9e-19ed6e7482fa\") " Sep 10 23:53:40.464631 kubelet[2651]: I0910 23:53:40.464591 2651 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0119b3bf-690b-439e-8d9e-19ed6e7482fa-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0119b3bf-690b-439e-8d9e-19ed6e7482fa" (UID: "0119b3bf-690b-439e-8d9e-19ed6e7482fa"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 23:53:40.467695 kubelet[2651]: I0910 23:53:40.467668 2651 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0119b3bf-690b-439e-8d9e-19ed6e7482fa-kube-api-access-8hljv" (OuterVolumeSpecName: "kube-api-access-8hljv") pod "0119b3bf-690b-439e-8d9e-19ed6e7482fa" (UID: "0119b3bf-690b-439e-8d9e-19ed6e7482fa"). InnerVolumeSpecName "kube-api-access-8hljv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:53:40.471315 kubelet[2651]: I0910 23:53:40.471280 2651 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0119b3bf-690b-439e-8d9e-19ed6e7482fa-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0119b3bf-690b-439e-8d9e-19ed6e7482fa" (UID: "0119b3bf-690b-439e-8d9e-19ed6e7482fa"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 23:53:40.564457 kubelet[2651]: I0910 23:53:40.564415 2651 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0119b3bf-690b-439e-8d9e-19ed6e7482fa-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 23:53:40.564457 kubelet[2651]: I0910 23:53:40.564447 2651 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0119b3bf-690b-439e-8d9e-19ed6e7482fa-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 23:53:40.564457 kubelet[2651]: I0910 23:53:40.564457 2651 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8hljv\" (UniqueName: \"kubernetes.io/projected/0119b3bf-690b-439e-8d9e-19ed6e7482fa-kube-api-access-8hljv\") on node \"localhost\" DevicePath \"\"" Sep 10 23:53:40.683865 systemd[1]: var-lib-kubelet-pods-0119b3bf\x2d690b\x2d439e\x2d8d9e\x2d19ed6e7482fa-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8hljv.mount: Deactivated successfully. Sep 10 23:53:40.684180 systemd[1]: var-lib-kubelet-pods-0119b3bf\x2d690b\x2d439e\x2d8d9e\x2d19ed6e7482fa-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 10 23:53:41.045249 systemd[1]: Removed slice kubepods-besteffort-pod0119b3bf_690b_439e_8d9e_19ed6e7482fa.slice - libcontainer container kubepods-besteffort-pod0119b3bf_690b_439e_8d9e_19ed6e7482fa.slice. 
Sep 10 23:53:41.060175 kubelet[2651]: I0910 23:53:41.059985 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-snpjh" podStartSLOduration=2.462141627 podStartE2EDuration="14.05997031s" podCreationTimestamp="2025-09-10 23:53:27 +0000 UTC" firstStartedPulling="2025-09-10 23:53:28.335283728 +0000 UTC m=+20.477260970" lastFinishedPulling="2025-09-10 23:53:39.933112411 +0000 UTC m=+32.075089653" observedRunningTime="2025-09-10 23:53:41.059491429 +0000 UTC m=+33.201468711" watchObservedRunningTime="2025-09-10 23:53:41.05997031 +0000 UTC m=+33.201947552" Sep 10 23:53:41.104935 systemd[1]: Created slice kubepods-besteffort-podf1a06d9e_157a_4500_9f6e_5edaa8b66c4b.slice - libcontainer container kubepods-besteffort-podf1a06d9e_157a_4500_9f6e_5edaa8b66c4b.slice. Sep 10 23:53:41.270074 kubelet[2651]: I0910 23:53:41.270012 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f1a06d9e-157a-4500-9f6e-5edaa8b66c4b-whisker-backend-key-pair\") pod \"whisker-587675c788-fcsbq\" (UID: \"f1a06d9e-157a-4500-9f6e-5edaa8b66c4b\") " pod="calico-system/whisker-587675c788-fcsbq" Sep 10 23:53:41.270074 kubelet[2651]: I0910 23:53:41.270073 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1a06d9e-157a-4500-9f6e-5edaa8b66c4b-whisker-ca-bundle\") pod \"whisker-587675c788-fcsbq\" (UID: \"f1a06d9e-157a-4500-9f6e-5edaa8b66c4b\") " pod="calico-system/whisker-587675c788-fcsbq" Sep 10 23:53:41.270253 kubelet[2651]: I0910 23:53:41.270108 2651 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtqpg\" (UniqueName: \"kubernetes.io/projected/f1a06d9e-157a-4500-9f6e-5edaa8b66c4b-kube-api-access-qtqpg\") pod \"whisker-587675c788-fcsbq\" (UID: 
\"f1a06d9e-157a-4500-9f6e-5edaa8b66c4b\") " pod="calico-system/whisker-587675c788-fcsbq" Sep 10 23:53:41.408700 containerd[1529]: time="2025-09-10T23:53:41.408662717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-587675c788-fcsbq,Uid:f1a06d9e-157a-4500-9f6e-5edaa8b66c4b,Namespace:calico-system,Attempt:0,}" Sep 10 23:53:41.658738 systemd-networkd[1433]: cali9295119cf47: Link UP Sep 10 23:53:41.658912 systemd-networkd[1433]: cali9295119cf47: Gained carrier Sep 10 23:53:41.671657 containerd[1529]: 2025-09-10 23:53:41.429 [INFO][3820] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:53:41.671657 containerd[1529]: 2025-09-10 23:53:41.466 [INFO][3820] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--587675c788--fcsbq-eth0 whisker-587675c788- calico-system f1a06d9e-157a-4500-9f6e-5edaa8b66c4b 871 0 2025-09-10 23:53:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:587675c788 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-587675c788-fcsbq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali9295119cf47 [] [] }} ContainerID="0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" Namespace="calico-system" Pod="whisker-587675c788-fcsbq" WorkloadEndpoint="localhost-k8s-whisker--587675c788--fcsbq-" Sep 10 23:53:41.671657 containerd[1529]: 2025-09-10 23:53:41.466 [INFO][3820] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" Namespace="calico-system" Pod="whisker-587675c788-fcsbq" WorkloadEndpoint="localhost-k8s-whisker--587675c788--fcsbq-eth0" Sep 10 23:53:41.671657 containerd[1529]: 2025-09-10 23:53:41.602 [INFO][3894] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" HandleID="k8s-pod-network.0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" Workload="localhost-k8s-whisker--587675c788--fcsbq-eth0" Sep 10 23:53:41.671876 containerd[1529]: 2025-09-10 23:53:41.602 [INFO][3894] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" HandleID="k8s-pod-network.0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" Workload="localhost-k8s-whisker--587675c788--fcsbq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005d93a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-587675c788-fcsbq", "timestamp":"2025-09-10 23:53:41.602830451 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:53:41.671876 containerd[1529]: 2025-09-10 23:53:41.603 [INFO][3894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:53:41.671876 containerd[1529]: 2025-09-10 23:53:41.603 [INFO][3894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:53:41.671876 containerd[1529]: 2025-09-10 23:53:41.603 [INFO][3894] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:53:41.671876 containerd[1529]: 2025-09-10 23:53:41.616 [INFO][3894] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" host="localhost" Sep 10 23:53:41.671876 containerd[1529]: 2025-09-10 23:53:41.623 [INFO][3894] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:53:41.671876 containerd[1529]: 2025-09-10 23:53:41.633 [INFO][3894] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:53:41.671876 containerd[1529]: 2025-09-10 23:53:41.635 [INFO][3894] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:41.671876 containerd[1529]: 2025-09-10 23:53:41.638 [INFO][3894] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:41.671876 containerd[1529]: 2025-09-10 23:53:41.639 [INFO][3894] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" host="localhost" Sep 10 23:53:41.672132 containerd[1529]: 2025-09-10 23:53:41.640 [INFO][3894] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c Sep 10 23:53:41.672132 containerd[1529]: 2025-09-10 23:53:41.644 [INFO][3894] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" host="localhost" Sep 10 23:53:41.672132 containerd[1529]: 2025-09-10 23:53:41.650 [INFO][3894] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" host="localhost" Sep 10 23:53:41.672132 containerd[1529]: 2025-09-10 23:53:41.650 [INFO][3894] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" host="localhost" Sep 10 23:53:41.672132 containerd[1529]: 2025-09-10 23:53:41.650 [INFO][3894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:53:41.672132 containerd[1529]: 2025-09-10 23:53:41.650 [INFO][3894] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" HandleID="k8s-pod-network.0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" Workload="localhost-k8s-whisker--587675c788--fcsbq-eth0" Sep 10 23:53:41.672263 containerd[1529]: 2025-09-10 23:53:41.652 [INFO][3820] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" Namespace="calico-system" Pod="whisker-587675c788-fcsbq" WorkloadEndpoint="localhost-k8s-whisker--587675c788--fcsbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--587675c788--fcsbq-eth0", GenerateName:"whisker-587675c788-", Namespace:"calico-system", SelfLink:"", UID:"f1a06d9e-157a-4500-9f6e-5edaa8b66c4b", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"587675c788", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-587675c788-fcsbq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9295119cf47", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:41.672263 containerd[1529]: 2025-09-10 23:53:41.653 [INFO][3820] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" Namespace="calico-system" Pod="whisker-587675c788-fcsbq" WorkloadEndpoint="localhost-k8s-whisker--587675c788--fcsbq-eth0" Sep 10 23:53:41.672338 containerd[1529]: 2025-09-10 23:53:41.653 [INFO][3820] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9295119cf47 ContainerID="0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" Namespace="calico-system" Pod="whisker-587675c788-fcsbq" WorkloadEndpoint="localhost-k8s-whisker--587675c788--fcsbq-eth0" Sep 10 23:53:41.672338 containerd[1529]: 2025-09-10 23:53:41.659 [INFO][3820] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" Namespace="calico-system" Pod="whisker-587675c788-fcsbq" WorkloadEndpoint="localhost-k8s-whisker--587675c788--fcsbq-eth0" Sep 10 23:53:41.672376 containerd[1529]: 2025-09-10 23:53:41.659 [INFO][3820] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" Namespace="calico-system" Pod="whisker-587675c788-fcsbq" 
WorkloadEndpoint="localhost-k8s-whisker--587675c788--fcsbq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--587675c788--fcsbq-eth0", GenerateName:"whisker-587675c788-", Namespace:"calico-system", SelfLink:"", UID:"f1a06d9e-157a-4500-9f6e-5edaa8b66c4b", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"587675c788", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c", Pod:"whisker-587675c788-fcsbq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali9295119cf47", MAC:"ea:d0:35:bb:a7:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:41.672425 containerd[1529]: 2025-09-10 23:53:41.668 [INFO][3820] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" Namespace="calico-system" Pod="whisker-587675c788-fcsbq" WorkloadEndpoint="localhost-k8s-whisker--587675c788--fcsbq-eth0" Sep 10 23:53:41.768578 containerd[1529]: time="2025-09-10T23:53:41.768518833Z" level=info msg="connecting to shim 
0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c" address="unix:///run/containerd/s/9ae8077167de0328330426a5dec21249d0ca12e21c611c7ca37d28da58aeb60f" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:53:41.812293 systemd[1]: Started cri-containerd-0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c.scope - libcontainer container 0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c. Sep 10 23:53:41.823236 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:53:41.847084 containerd[1529]: time="2025-09-10T23:53:41.847026912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-587675c788-fcsbq,Uid:f1a06d9e-157a-4500-9f6e-5edaa8b66c4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c\"" Sep 10 23:53:41.848487 containerd[1529]: time="2025-09-10T23:53:41.848461636Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 23:53:41.951306 kubelet[2651]: I0910 23:53:41.951212 2651 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0119b3bf-690b-439e-8d9e-19ed6e7482fa" path="/var/lib/kubelet/pods/0119b3bf-690b-439e-8d9e-19ed6e7482fa/volumes" Sep 10 23:53:42.046925 kubelet[2651]: I0910 23:53:42.046493 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:53:42.884063 containerd[1529]: time="2025-09-10T23:53:42.884009810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:42.884510 containerd[1529]: time="2025-09-10T23:53:42.884477531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 10 23:53:42.885197 containerd[1529]: time="2025-09-10T23:53:42.885164212Z" level=info msg="ImageCreate event 
name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:42.894889 containerd[1529]: time="2025-09-10T23:53:42.894860355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:42.895461 containerd[1529]: time="2025-09-10T23:53:42.895432357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.046939921s" Sep 10 23:53:42.895505 containerd[1529]: time="2025-09-10T23:53:42.895460757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 10 23:53:42.897347 containerd[1529]: time="2025-09-10T23:53:42.897305281Z" level=info msg="CreateContainer within sandbox \"0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 23:53:42.904056 containerd[1529]: time="2025-09-10T23:53:42.903307136Z" level=info msg="Container ee31a87ced34d4ee8c3bd6efde969834f856608c1d7499fd9a7404a0d9361f1b: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:53:42.910522 containerd[1529]: time="2025-09-10T23:53:42.910494353Z" level=info msg="CreateContainer within sandbox \"0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ee31a87ced34d4ee8c3bd6efde969834f856608c1d7499fd9a7404a0d9361f1b\"" Sep 10 23:53:42.911157 containerd[1529]: 
time="2025-09-10T23:53:42.911118274Z" level=info msg="StartContainer for \"ee31a87ced34d4ee8c3bd6efde969834f856608c1d7499fd9a7404a0d9361f1b\"" Sep 10 23:53:42.912503 containerd[1529]: time="2025-09-10T23:53:42.912436237Z" level=info msg="connecting to shim ee31a87ced34d4ee8c3bd6efde969834f856608c1d7499fd9a7404a0d9361f1b" address="unix:///run/containerd/s/9ae8077167de0328330426a5dec21249d0ca12e21c611c7ca37d28da58aeb60f" protocol=ttrpc version=3 Sep 10 23:53:42.932293 systemd[1]: Started cri-containerd-ee31a87ced34d4ee8c3bd6efde969834f856608c1d7499fd9a7404a0d9361f1b.scope - libcontainer container ee31a87ced34d4ee8c3bd6efde969834f856608c1d7499fd9a7404a0d9361f1b. Sep 10 23:53:42.962626 containerd[1529]: time="2025-09-10T23:53:42.962521557Z" level=info msg="StartContainer for \"ee31a87ced34d4ee8c3bd6efde969834f856608c1d7499fd9a7404a0d9361f1b\" returns successfully" Sep 10 23:53:42.964119 containerd[1529]: time="2025-09-10T23:53:42.963740280Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 23:53:43.433358 systemd-networkd[1433]: cali9295119cf47: Gained IPv6LL Sep 10 23:53:44.521867 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount525099543.mount: Deactivated successfully. 
Sep 10 23:53:44.634992 containerd[1529]: time="2025-09-10T23:53:44.634894753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:44.636028 containerd[1529]: time="2025-09-10T23:53:44.635817642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 10 23:53:44.636802 containerd[1529]: time="2025-09-10T23:53:44.636775412Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:44.640549 containerd[1529]: time="2025-09-10T23:53:44.640520169Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:44.641612 containerd[1529]: time="2025-09-10T23:53:44.641568980Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.67779438s" Sep 10 23:53:44.641612 containerd[1529]: time="2025-09-10T23:53:44.641609900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 10 23:53:44.643825 containerd[1529]: time="2025-09-10T23:53:44.643625000Z" level=info msg="CreateContainer within sandbox \"0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 23:53:44.654172 
containerd[1529]: time="2025-09-10T23:53:44.651436278Z" level=info msg="Container 5b925187459aefbb8144fecd86e86c2800a13f0a3dba7dc3ebdea9ab05aeeaca: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:53:44.663047 containerd[1529]: time="2025-09-10T23:53:44.663009034Z" level=info msg="CreateContainer within sandbox \"0a104b70f7f82ac4a1da9eff63903645e0c4bda858f764755bed2b8c5742a71c\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"5b925187459aefbb8144fecd86e86c2800a13f0a3dba7dc3ebdea9ab05aeeaca\"" Sep 10 23:53:44.664898 containerd[1529]: time="2025-09-10T23:53:44.664628890Z" level=info msg="StartContainer for \"5b925187459aefbb8144fecd86e86c2800a13f0a3dba7dc3ebdea9ab05aeeaca\"" Sep 10 23:53:44.665859 containerd[1529]: time="2025-09-10T23:53:44.665833382Z" level=info msg="connecting to shim 5b925187459aefbb8144fecd86e86c2800a13f0a3dba7dc3ebdea9ab05aeeaca" address="unix:///run/containerd/s/9ae8077167de0328330426a5dec21249d0ca12e21c611c7ca37d28da58aeb60f" protocol=ttrpc version=3 Sep 10 23:53:44.683261 systemd[1]: Started cri-containerd-5b925187459aefbb8144fecd86e86c2800a13f0a3dba7dc3ebdea9ab05aeeaca.scope - libcontainer container 5b925187459aefbb8144fecd86e86c2800a13f0a3dba7dc3ebdea9ab05aeeaca. 
Sep 10 23:53:44.718341 containerd[1529]: time="2025-09-10T23:53:44.718308666Z" level=info msg="StartContainer for \"5b925187459aefbb8144fecd86e86c2800a13f0a3dba7dc3ebdea9ab05aeeaca\" returns successfully" Sep 10 23:53:45.063886 kubelet[2651]: I0910 23:53:45.063694 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-587675c788-fcsbq" podStartSLOduration=1.269475344 podStartE2EDuration="4.063669416s" podCreationTimestamp="2025-09-10 23:53:41 +0000 UTC" firstStartedPulling="2025-09-10 23:53:41.848122515 +0000 UTC m=+33.990099717" lastFinishedPulling="2025-09-10 23:53:44.642316547 +0000 UTC m=+36.784293789" observedRunningTime="2025-09-10 23:53:45.062319083 +0000 UTC m=+37.204296325" watchObservedRunningTime="2025-09-10 23:53:45.063669416 +0000 UTC m=+37.205646658" Sep 10 23:53:47.947655 containerd[1529]: time="2025-09-10T23:53:47.947407795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-777d545795-gnhhs,Uid:ba6e97a8-80cf-4e25-a0fc-9481e05eee18,Namespace:calico-system,Attempt:0,}" Sep 10 23:53:47.947655 containerd[1529]: time="2025-09-10T23:53:47.947428595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xvkpt,Uid:62bd140d-cac0-41ac-b0b4-d6e71703a322,Namespace:calico-system,Attempt:0,}" Sep 10 23:53:48.109223 systemd-networkd[1433]: cali9113ccfac36: Link UP Sep 10 23:53:48.109386 systemd-networkd[1433]: cali9113ccfac36: Gained carrier Sep 10 23:53:48.125000 containerd[1529]: 2025-09-10 23:53:47.993 [INFO][4215] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:53:48.125000 containerd[1529]: 2025-09-10 23:53:48.008 [INFO][4215] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--xvkpt-eth0 csi-node-driver- calico-system 62bd140d-cac0-41ac-b0b4-d6e71703a322 706 0 2025-09-10 23:53:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-xvkpt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9113ccfac36 [] [] }} ContainerID="765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" Namespace="calico-system" Pod="csi-node-driver-xvkpt" WorkloadEndpoint="localhost-k8s-csi--node--driver--xvkpt-" Sep 10 23:53:48.125000 containerd[1529]: 2025-09-10 23:53:48.008 [INFO][4215] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" Namespace="calico-system" Pod="csi-node-driver-xvkpt" WorkloadEndpoint="localhost-k8s-csi--node--driver--xvkpt-eth0" Sep 10 23:53:48.125000 containerd[1529]: 2025-09-10 23:53:48.050 [INFO][4261] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" HandleID="k8s-pod-network.765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" Workload="localhost-k8s-csi--node--driver--xvkpt-eth0" Sep 10 23:53:48.125231 containerd[1529]: 2025-09-10 23:53:48.050 [INFO][4261] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" HandleID="k8s-pod-network.765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" Workload="localhost-k8s-csi--node--driver--xvkpt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d7b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-xvkpt", "timestamp":"2025-09-10 23:53:48.050055964 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:53:48.125231 containerd[1529]: 2025-09-10 23:53:48.050 [INFO][4261] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:53:48.125231 containerd[1529]: 2025-09-10 23:53:48.050 [INFO][4261] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:53:48.125231 containerd[1529]: 2025-09-10 23:53:48.050 [INFO][4261] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:53:48.125231 containerd[1529]: 2025-09-10 23:53:48.071 [INFO][4261] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" host="localhost" Sep 10 23:53:48.125231 containerd[1529]: 2025-09-10 23:53:48.081 [INFO][4261] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:53:48.125231 containerd[1529]: 2025-09-10 23:53:48.086 [INFO][4261] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:53:48.125231 containerd[1529]: 2025-09-10 23:53:48.088 [INFO][4261] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:48.125231 containerd[1529]: 2025-09-10 23:53:48.090 [INFO][4261] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:48.125231 containerd[1529]: 2025-09-10 23:53:48.091 [INFO][4261] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" host="localhost" Sep 10 23:53:48.125434 containerd[1529]: 2025-09-10 23:53:48.092 [INFO][4261] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92 Sep 10 23:53:48.125434 containerd[1529]: 2025-09-10 23:53:48.097 [INFO][4261] 
ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" host="localhost" Sep 10 23:53:48.125434 containerd[1529]: 2025-09-10 23:53:48.102 [INFO][4261] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" host="localhost" Sep 10 23:53:48.125434 containerd[1529]: 2025-09-10 23:53:48.102 [INFO][4261] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" host="localhost" Sep 10 23:53:48.125434 containerd[1529]: 2025-09-10 23:53:48.102 [INFO][4261] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:53:48.125434 containerd[1529]: 2025-09-10 23:53:48.102 [INFO][4261] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" HandleID="k8s-pod-network.765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" Workload="localhost-k8s-csi--node--driver--xvkpt-eth0" Sep 10 23:53:48.125556 containerd[1529]: 2025-09-10 23:53:48.107 [INFO][4215] cni-plugin/k8s.go 418: Populated endpoint ContainerID="765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" Namespace="calico-system" Pod="csi-node-driver-xvkpt" WorkloadEndpoint="localhost-k8s-csi--node--driver--xvkpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xvkpt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"62bd140d-cac0-41ac-b0b4-d6e71703a322", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 28, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-xvkpt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9113ccfac36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:48.125612 containerd[1529]: 2025-09-10 23:53:48.107 [INFO][4215] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" Namespace="calico-system" Pod="csi-node-driver-xvkpt" WorkloadEndpoint="localhost-k8s-csi--node--driver--xvkpt-eth0" Sep 10 23:53:48.125612 containerd[1529]: 2025-09-10 23:53:48.107 [INFO][4215] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9113ccfac36 ContainerID="765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" Namespace="calico-system" Pod="csi-node-driver-xvkpt" WorkloadEndpoint="localhost-k8s-csi--node--driver--xvkpt-eth0" Sep 10 23:53:48.125612 containerd[1529]: 2025-09-10 23:53:48.109 [INFO][4215] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" 
Namespace="calico-system" Pod="csi-node-driver-xvkpt" WorkloadEndpoint="localhost-k8s-csi--node--driver--xvkpt-eth0" Sep 10 23:53:48.125685 containerd[1529]: 2025-09-10 23:53:48.109 [INFO][4215] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" Namespace="calico-system" Pod="csi-node-driver-xvkpt" WorkloadEndpoint="localhost-k8s-csi--node--driver--xvkpt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--xvkpt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"62bd140d-cac0-41ac-b0b4-d6e71703a322", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92", Pod:"csi-node-driver-xvkpt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9113ccfac36", MAC:"5e:bc:e7:5e:eb:a8", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:48.125731 containerd[1529]: 2025-09-10 23:53:48.120 [INFO][4215] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" Namespace="calico-system" Pod="csi-node-driver-xvkpt" WorkloadEndpoint="localhost-k8s-csi--node--driver--xvkpt-eth0" Sep 10 23:53:48.145514 containerd[1529]: time="2025-09-10T23:53:48.145462856Z" level=info msg="connecting to shim 765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92" address="unix:///run/containerd/s/e9a5cc45c60574922e1b688a7d8f3f4e6a65a5e77783ea8990851372c8d556cc" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:53:48.171394 systemd[1]: Started cri-containerd-765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92.scope - libcontainer container 765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92. Sep 10 23:53:48.188226 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:53:48.213248 systemd-networkd[1433]: cali105e6b845e6: Link UP Sep 10 23:53:48.213505 systemd-networkd[1433]: cali105e6b845e6: Gained carrier Sep 10 23:53:48.217603 containerd[1529]: time="2025-09-10T23:53:48.217560939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-xvkpt,Uid:62bd140d-cac0-41ac-b0b4-d6e71703a322,Namespace:calico-system,Attempt:0,} returns sandbox id \"765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92\"" Sep 10 23:53:48.221349 containerd[1529]: time="2025-09-10T23:53:48.221304252Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 10 23:53:48.231644 containerd[1529]: 2025-09-10 23:53:47.991 [INFO][4220] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:53:48.231644 containerd[1529]: 2025-09-10 23:53:48.007 [INFO][4220] cni-plugin/plugin.go 340: Calico CNI found existing 
endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--777d545795--gnhhs-eth0 calico-kube-controllers-777d545795- calico-system ba6e97a8-80cf-4e25-a0fc-9481e05eee18 806 0 2025-09-10 23:53:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:777d545795 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-777d545795-gnhhs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali105e6b845e6 [] [] }} ContainerID="de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" Namespace="calico-system" Pod="calico-kube-controllers-777d545795-gnhhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--777d545795--gnhhs-" Sep 10 23:53:48.231644 containerd[1529]: 2025-09-10 23:53:48.007 [INFO][4220] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" Namespace="calico-system" Pod="calico-kube-controllers-777d545795-gnhhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--777d545795--gnhhs-eth0" Sep 10 23:53:48.231644 containerd[1529]: 2025-09-10 23:53:48.052 [INFO][4250] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" HandleID="k8s-pod-network.de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" Workload="localhost-k8s-calico--kube--controllers--777d545795--gnhhs-eth0" Sep 10 23:53:48.231923 containerd[1529]: 2025-09-10 23:53:48.052 [INFO][4250] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" HandleID="k8s-pod-network.de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" 
Workload="localhost-k8s-calico--kube--controllers--777d545795--gnhhs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400033b630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-777d545795-gnhhs", "timestamp":"2025-09-10 23:53:48.052545946 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:53:48.231923 containerd[1529]: 2025-09-10 23:53:48.052 [INFO][4250] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:53:48.231923 containerd[1529]: 2025-09-10 23:53:48.102 [INFO][4250] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:53:48.231923 containerd[1529]: 2025-09-10 23:53:48.102 [INFO][4250] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:53:48.231923 containerd[1529]: 2025-09-10 23:53:48.173 [INFO][4250] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" host="localhost" Sep 10 23:53:48.231923 containerd[1529]: 2025-09-10 23:53:48.181 [INFO][4250] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:53:48.231923 containerd[1529]: 2025-09-10 23:53:48.187 [INFO][4250] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:53:48.231923 containerd[1529]: 2025-09-10 23:53:48.189 [INFO][4250] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:48.231923 containerd[1529]: 2025-09-10 23:53:48.193 [INFO][4250] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:48.231923 containerd[1529]: 2025-09-10 23:53:48.193 [INFO][4250] ipam/ipam.go 1220: Attempting to assign 1 
addresses from block block=192.168.88.128/26 handle="k8s-pod-network.de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" host="localhost" Sep 10 23:53:48.232153 containerd[1529]: 2025-09-10 23:53:48.195 [INFO][4250] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73 Sep 10 23:53:48.232153 containerd[1529]: 2025-09-10 23:53:48.199 [INFO][4250] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" host="localhost" Sep 10 23:53:48.232153 containerd[1529]: 2025-09-10 23:53:48.205 [INFO][4250] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" host="localhost" Sep 10 23:53:48.232153 containerd[1529]: 2025-09-10 23:53:48.205 [INFO][4250] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" host="localhost" Sep 10 23:53:48.232153 containerd[1529]: 2025-09-10 23:53:48.205 [INFO][4250] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 23:53:48.232153 containerd[1529]: 2025-09-10 23:53:48.205 [INFO][4250] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" HandleID="k8s-pod-network.de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" Workload="localhost-k8s-calico--kube--controllers--777d545795--gnhhs-eth0" Sep 10 23:53:48.232278 containerd[1529]: 2025-09-10 23:53:48.209 [INFO][4220] cni-plugin/k8s.go 418: Populated endpoint ContainerID="de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" Namespace="calico-system" Pod="calico-kube-controllers-777d545795-gnhhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--777d545795--gnhhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--777d545795--gnhhs-eth0", GenerateName:"calico-kube-controllers-777d545795-", Namespace:"calico-system", SelfLink:"", UID:"ba6e97a8-80cf-4e25-a0fc-9481e05eee18", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"777d545795", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-777d545795-gnhhs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali105e6b845e6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:48.232333 containerd[1529]: 2025-09-10 23:53:48.210 [INFO][4220] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" Namespace="calico-system" Pod="calico-kube-controllers-777d545795-gnhhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--777d545795--gnhhs-eth0" Sep 10 23:53:48.232333 containerd[1529]: 2025-09-10 23:53:48.210 [INFO][4220] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali105e6b845e6 ContainerID="de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" Namespace="calico-system" Pod="calico-kube-controllers-777d545795-gnhhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--777d545795--gnhhs-eth0" Sep 10 23:53:48.232333 containerd[1529]: 2025-09-10 23:53:48.216 [INFO][4220] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" Namespace="calico-system" Pod="calico-kube-controllers-777d545795-gnhhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--777d545795--gnhhs-eth0" Sep 10 23:53:48.232395 containerd[1529]: 2025-09-10 23:53:48.217 [INFO][4220] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" Namespace="calico-system" Pod="calico-kube-controllers-777d545795-gnhhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--777d545795--gnhhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--777d545795--gnhhs-eth0", GenerateName:"calico-kube-controllers-777d545795-", Namespace:"calico-system", SelfLink:"", UID:"ba6e97a8-80cf-4e25-a0fc-9481e05eee18", ResourceVersion:"806", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"777d545795", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73", Pod:"calico-kube-controllers-777d545795-gnhhs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali105e6b845e6", MAC:"9e:54:fa:75:07:75", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:48.232455 containerd[1529]: 2025-09-10 23:53:48.229 [INFO][4220] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" Namespace="calico-system" Pod="calico-kube-controllers-777d545795-gnhhs" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--777d545795--gnhhs-eth0" Sep 10 23:53:48.292390 containerd[1529]: time="2025-09-10T23:53:48.292340166Z" level=info msg="connecting to shim 
de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73" address="unix:///run/containerd/s/c25798f21264c95fe35db65dce1f714aa2dd896e545e3db5a184614d039b9451" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:53:48.321295 systemd[1]: Started cri-containerd-de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73.scope - libcontainer container de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73. Sep 10 23:53:48.331971 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:53:48.355301 containerd[1529]: time="2025-09-10T23:53:48.355267088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-777d545795-gnhhs,Uid:ba6e97a8-80cf-4e25-a0fc-9481e05eee18,Namespace:calico-system,Attempt:0,} returns sandbox id \"de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73\"" Sep 10 23:53:48.941212 containerd[1529]: time="2025-09-10T23:53:48.941114716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b859ff9c-r7vkr,Uid:b3ce6a54-d271-47e5-8b07-284987db4893,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:53:49.051979 systemd-networkd[1433]: cali2703ec7929c: Link UP Sep 10 23:53:49.052159 systemd-networkd[1433]: cali2703ec7929c: Gained carrier Sep 10 23:53:49.080311 containerd[1529]: 2025-09-10 23:53:48.962 [INFO][4389] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:53:49.080311 containerd[1529]: 2025-09-10 23:53:48.977 [INFO][4389] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--78b859ff9c--r7vkr-eth0 calico-apiserver-78b859ff9c- calico-apiserver b3ce6a54-d271-47e5-8b07-284987db4893 804 0 2025-09-10 23:53:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78b859ff9c projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-78b859ff9c-r7vkr eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2703ec7929c [] [] }} ContainerID="6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-r7vkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--r7vkr-" Sep 10 23:53:49.080311 containerd[1529]: 2025-09-10 23:53:48.977 [INFO][4389] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-r7vkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--r7vkr-eth0" Sep 10 23:53:49.080311 containerd[1529]: 2025-09-10 23:53:49.001 [INFO][4402] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" HandleID="k8s-pod-network.6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" Workload="localhost-k8s-calico--apiserver--78b859ff9c--r7vkr-eth0" Sep 10 23:53:49.081558 containerd[1529]: 2025-09-10 23:53:49.001 [INFO][4402] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" HandleID="k8s-pod-network.6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" Workload="localhost-k8s-calico--apiserver--78b859ff9c--r7vkr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c580), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-78b859ff9c-r7vkr", "timestamp":"2025-09-10 23:53:49.001290493 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:53:49.081558 containerd[1529]: 2025-09-10 23:53:49.001 [INFO][4402] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:53:49.081558 containerd[1529]: 2025-09-10 23:53:49.001 [INFO][4402] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:53:49.081558 containerd[1529]: 2025-09-10 23:53:49.001 [INFO][4402] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:53:49.081558 containerd[1529]: 2025-09-10 23:53:49.021 [INFO][4402] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" host="localhost" Sep 10 23:53:49.081558 containerd[1529]: 2025-09-10 23:53:49.025 [INFO][4402] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:53:49.081558 containerd[1529]: 2025-09-10 23:53:49.029 [INFO][4402] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:53:49.081558 containerd[1529]: 2025-09-10 23:53:49.031 [INFO][4402] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:49.081558 containerd[1529]: 2025-09-10 23:53:49.033 [INFO][4402] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:49.081558 containerd[1529]: 2025-09-10 23:53:49.033 [INFO][4402] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" host="localhost" Sep 10 23:53:49.081770 containerd[1529]: 2025-09-10 23:53:49.036 [INFO][4402] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6 Sep 10 23:53:49.081770 containerd[1529]: 2025-09-10 23:53:49.040 [INFO][4402] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" host="localhost" Sep 10 23:53:49.081770 containerd[1529]: 2025-09-10 23:53:49.045 [INFO][4402] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" host="localhost" Sep 10 23:53:49.081770 containerd[1529]: 2025-09-10 23:53:49.045 [INFO][4402] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" host="localhost" Sep 10 23:53:49.081770 containerd[1529]: 2025-09-10 23:53:49.045 [INFO][4402] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:53:49.081770 containerd[1529]: 2025-09-10 23:53:49.045 [INFO][4402] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" HandleID="k8s-pod-network.6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" Workload="localhost-k8s-calico--apiserver--78b859ff9c--r7vkr-eth0" Sep 10 23:53:49.081883 containerd[1529]: 2025-09-10 23:53:49.048 [INFO][4389] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-r7vkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--r7vkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78b859ff9c--r7vkr-eth0", GenerateName:"calico-apiserver-78b859ff9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"b3ce6a54-d271-47e5-8b07-284987db4893", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 24, 0, time.Local), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78b859ff9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-78b859ff9c-r7vkr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2703ec7929c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:49.081931 containerd[1529]: 2025-09-10 23:53:49.048 [INFO][4389] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-r7vkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--r7vkr-eth0" Sep 10 23:53:49.081931 containerd[1529]: 2025-09-10 23:53:49.048 [INFO][4389] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2703ec7929c ContainerID="6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-r7vkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--r7vkr-eth0" Sep 10 23:53:49.081931 containerd[1529]: 2025-09-10 23:53:49.052 [INFO][4389] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-r7vkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--r7vkr-eth0" Sep 10 23:53:49.081988 containerd[1529]: 2025-09-10 23:53:49.054 [INFO][4389] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-r7vkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--r7vkr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78b859ff9c--r7vkr-eth0", GenerateName:"calico-apiserver-78b859ff9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"b3ce6a54-d271-47e5-8b07-284987db4893", ResourceVersion:"804", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78b859ff9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6", Pod:"calico-apiserver-78b859ff9c-r7vkr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", 
"ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2703ec7929c", MAC:"0e:28:a9:35:20:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:49.082036 containerd[1529]: 2025-09-10 23:53:49.074 [INFO][4389] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-r7vkr" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--r7vkr-eth0" Sep 10 23:53:49.120560 containerd[1529]: time="2025-09-10T23:53:49.120365807Z" level=info msg="connecting to shim 6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6" address="unix:///run/containerd/s/c78f1c776159fbb7ecc9d2182512c5149219fd52c7161debe38156e23132f74c" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:53:49.160284 systemd[1]: Started cri-containerd-6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6.scope - libcontainer container 6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6. 
Sep 10 23:53:49.176874 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:53:49.221372 containerd[1529]: time="2025-09-10T23:53:49.221234842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b859ff9c-r7vkr,Uid:b3ce6a54-d271-47e5-8b07-284987db4893,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6\"" Sep 10 23:53:49.250851 containerd[1529]: time="2025-09-10T23:53:49.250783339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:49.251335 containerd[1529]: time="2025-09-10T23:53:49.251296503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 10 23:53:49.252406 containerd[1529]: time="2025-09-10T23:53:49.252179271Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:49.254235 containerd[1529]: time="2025-09-10T23:53:49.254205528Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:49.254919 containerd[1529]: time="2025-09-10T23:53:49.254894774Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.033546161s" Sep 10 23:53:49.254987 containerd[1529]: time="2025-09-10T23:53:49.254922975Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 10 23:53:49.257427 containerd[1529]: time="2025-09-10T23:53:49.257395876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 23:53:49.261048 containerd[1529]: time="2025-09-10T23:53:49.261003147Z" level=info msg="CreateContainer within sandbox \"765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 10 23:53:49.270566 containerd[1529]: time="2025-09-10T23:53:49.270525150Z" level=info msg="Container 8bb2d5b471f2d21bf5ac115e62c9d0cb8c7900fbfc678c00572a904cf3b1fd27: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:53:49.276488 containerd[1529]: time="2025-09-10T23:53:49.276451562Z" level=info msg="CreateContainer within sandbox \"765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8bb2d5b471f2d21bf5ac115e62c9d0cb8c7900fbfc678c00572a904cf3b1fd27\"" Sep 10 23:53:49.277600 containerd[1529]: time="2025-09-10T23:53:49.277561531Z" level=info msg="StartContainer for \"8bb2d5b471f2d21bf5ac115e62c9d0cb8c7900fbfc678c00572a904cf3b1fd27\"" Sep 10 23:53:49.280936 containerd[1529]: time="2025-09-10T23:53:49.280890880Z" level=info msg="connecting to shim 8bb2d5b471f2d21bf5ac115e62c9d0cb8c7900fbfc678c00572a904cf3b1fd27" address="unix:///run/containerd/s/e9a5cc45c60574922e1b688a7d8f3f4e6a65a5e77783ea8990851372c8d556cc" protocol=ttrpc version=3 Sep 10 23:53:49.306413 systemd[1]: Started cri-containerd-8bb2d5b471f2d21bf5ac115e62c9d0cb8c7900fbfc678c00572a904cf3b1fd27.scope - libcontainer container 8bb2d5b471f2d21bf5ac115e62c9d0cb8c7900fbfc678c00572a904cf3b1fd27. 
Sep 10 23:53:49.321266 systemd-networkd[1433]: cali105e6b845e6: Gained IPv6LL Sep 10 23:53:49.338741 containerd[1529]: time="2025-09-10T23:53:49.338709542Z" level=info msg="StartContainer for \"8bb2d5b471f2d21bf5ac115e62c9d0cb8c7900fbfc678c00572a904cf3b1fd27\" returns successfully" Sep 10 23:53:49.513295 systemd-networkd[1433]: cali9113ccfac36: Gained IPv6LL Sep 10 23:53:49.941817 kubelet[2651]: E0910 23:53:49.941768 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:49.942815 containerd[1529]: time="2025-09-10T23:53:49.942184620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b859ff9c-ks5wz,Uid:056ea6d9-2a10-4480-8eca-64f1c74b82cc,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:53:49.942815 containerd[1529]: time="2025-09-10T23:53:49.942498823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gcdfs,Uid:a659cd15-573d-41fb-8f1f-438acd6270eb,Namespace:kube-system,Attempt:0,}" Sep 10 23:53:50.059152 systemd-networkd[1433]: cali3ef5bdd9551: Link UP Sep 10 23:53:50.059730 systemd-networkd[1433]: cali3ef5bdd9551: Gained carrier Sep 10 23:53:50.074718 containerd[1529]: 2025-09-10 23:53:49.963 [INFO][4522] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:53:50.074718 containerd[1529]: 2025-09-10 23:53:49.988 [INFO][4522] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--78b859ff9c--ks5wz-eth0 calico-apiserver-78b859ff9c- calico-apiserver 056ea6d9-2a10-4480-8eca-64f1c74b82cc 808 0 2025-09-10 23:53:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78b859ff9c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] 
[]} {k8s localhost calico-apiserver-78b859ff9c-ks5wz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3ef5bdd9551 [] [] }} ContainerID="58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-ks5wz" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--ks5wz-" Sep 10 23:53:50.074718 containerd[1529]: 2025-09-10 23:53:49.988 [INFO][4522] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-ks5wz" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--ks5wz-eth0" Sep 10 23:53:50.074718 containerd[1529]: 2025-09-10 23:53:50.013 [INFO][4558] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" HandleID="k8s-pod-network.58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" Workload="localhost-k8s-calico--apiserver--78b859ff9c--ks5wz-eth0" Sep 10 23:53:50.074916 containerd[1529]: 2025-09-10 23:53:50.013 [INFO][4558] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" HandleID="k8s-pod-network.58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" Workload="localhost-k8s-calico--apiserver--78b859ff9c--ks5wz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a9760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-78b859ff9c-ks5wz", "timestamp":"2025-09-10 23:53:50.013457636 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:53:50.074916 
containerd[1529]: 2025-09-10 23:53:50.013 [INFO][4558] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:53:50.074916 containerd[1529]: 2025-09-10 23:53:50.013 [INFO][4558] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:53:50.074916 containerd[1529]: 2025-09-10 23:53:50.013 [INFO][4558] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:53:50.074916 containerd[1529]: 2025-09-10 23:53:50.024 [INFO][4558] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" host="localhost" Sep 10 23:53:50.074916 containerd[1529]: 2025-09-10 23:53:50.029 [INFO][4558] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:53:50.074916 containerd[1529]: 2025-09-10 23:53:50.034 [INFO][4558] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:53:50.074916 containerd[1529]: 2025-09-10 23:53:50.037 [INFO][4558] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:50.074916 containerd[1529]: 2025-09-10 23:53:50.039 [INFO][4558] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:50.074916 containerd[1529]: 2025-09-10 23:53:50.039 [INFO][4558] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" host="localhost" Sep 10 23:53:50.075125 containerd[1529]: 2025-09-10 23:53:50.040 [INFO][4558] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c Sep 10 23:53:50.075125 containerd[1529]: 2025-09-10 23:53:50.044 [INFO][4558] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 
handle="k8s-pod-network.58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" host="localhost" Sep 10 23:53:50.075125 containerd[1529]: 2025-09-10 23:53:50.050 [INFO][4558] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" host="localhost" Sep 10 23:53:50.075125 containerd[1529]: 2025-09-10 23:53:50.050 [INFO][4558] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" host="localhost" Sep 10 23:53:50.075125 containerd[1529]: 2025-09-10 23:53:50.050 [INFO][4558] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:53:50.075125 containerd[1529]: 2025-09-10 23:53:50.050 [INFO][4558] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" HandleID="k8s-pod-network.58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" Workload="localhost-k8s-calico--apiserver--78b859ff9c--ks5wz-eth0" Sep 10 23:53:50.075255 containerd[1529]: 2025-09-10 23:53:50.052 [INFO][4522] cni-plugin/k8s.go 418: Populated endpoint ContainerID="58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-ks5wz" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--ks5wz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78b859ff9c--ks5wz-eth0", GenerateName:"calico-apiserver-78b859ff9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"056ea6d9-2a10-4480-8eca-64f1c74b82cc", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 24, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78b859ff9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-78b859ff9c-ks5wz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3ef5bdd9551", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:50.075305 containerd[1529]: 2025-09-10 23:53:50.056 [INFO][4522] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-ks5wz" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--ks5wz-eth0" Sep 10 23:53:50.075305 containerd[1529]: 2025-09-10 23:53:50.056 [INFO][4522] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3ef5bdd9551 ContainerID="58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-ks5wz" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--ks5wz-eth0" Sep 10 23:53:50.075305 containerd[1529]: 2025-09-10 23:53:50.060 [INFO][4522] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" 
Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-ks5wz" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--ks5wz-eth0" Sep 10 23:53:50.075365 containerd[1529]: 2025-09-10 23:53:50.060 [INFO][4522] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-ks5wz" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--ks5wz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--78b859ff9c--ks5wz-eth0", GenerateName:"calico-apiserver-78b859ff9c-", Namespace:"calico-apiserver", SelfLink:"", UID:"056ea6d9-2a10-4480-8eca-64f1c74b82cc", ResourceVersion:"808", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78b859ff9c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c", Pod:"calico-apiserver-78b859ff9c-ks5wz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3ef5bdd9551", MAC:"5e:41:0d:bb:ca:4a", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:50.075411 containerd[1529]: 2025-09-10 23:53:50.072 [INFO][4522] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" Namespace="calico-apiserver" Pod="calico-apiserver-78b859ff9c-ks5wz" WorkloadEndpoint="localhost-k8s-calico--apiserver--78b859ff9c--ks5wz-eth0" Sep 10 23:53:50.096293 containerd[1529]: time="2025-09-10T23:53:50.096250495Z" level=info msg="connecting to shim 58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c" address="unix:///run/containerd/s/ec2e15b5e846719285a1a4887a6d82201a4df8bc829c49772bae4b31522e1d13" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:53:50.122316 systemd[1]: Started cri-containerd-58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c.scope - libcontainer container 58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c. 
Sep 10 23:53:50.149716 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:53:50.169350 systemd-networkd[1433]: calif06c78f95f2: Link UP Sep 10 23:53:50.170010 systemd-networkd[1433]: calif06c78f95f2: Gained carrier Sep 10 23:53:50.187957 containerd[1529]: time="2025-09-10T23:53:50.187644187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78b859ff9c-ks5wz,Uid:056ea6d9-2a10-4480-8eca-64f1c74b82cc,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c\"" Sep 10 23:53:50.189488 containerd[1529]: 2025-09-10 23:53:49.967 [INFO][4530] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:53:50.189488 containerd[1529]: 2025-09-10 23:53:49.989 [INFO][4530] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--gcdfs-eth0 coredns-668d6bf9bc- kube-system a659cd15-573d-41fb-8f1f-438acd6270eb 807 0 2025-09-10 23:53:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-gcdfs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif06c78f95f2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-gcdfs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gcdfs-" Sep 10 23:53:50.189488 containerd[1529]: 2025-09-10 23:53:49.989 [INFO][4530] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-gcdfs" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gcdfs-eth0" Sep 10 23:53:50.189488 containerd[1529]: 2025-09-10 23:53:50.013 [INFO][4557] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" HandleID="k8s-pod-network.d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" Workload="localhost-k8s-coredns--668d6bf9bc--gcdfs-eth0" Sep 10 23:53:50.189657 containerd[1529]: 2025-09-10 23:53:50.013 [INFO][4557] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" HandleID="k8s-pod-network.d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" Workload="localhost-k8s-coredns--668d6bf9bc--gcdfs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001a2e30), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-gcdfs", "timestamp":"2025-09-10 23:53:50.013558477 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:53:50.189657 containerd[1529]: 2025-09-10 23:53:50.013 [INFO][4557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:53:50.189657 containerd[1529]: 2025-09-10 23:53:50.050 [INFO][4557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:53:50.189657 containerd[1529]: 2025-09-10 23:53:50.051 [INFO][4557] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:53:50.189657 containerd[1529]: 2025-09-10 23:53:50.124 [INFO][4557] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" host="localhost" Sep 10 23:53:50.189657 containerd[1529]: 2025-09-10 23:53:50.134 [INFO][4557] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:53:50.189657 containerd[1529]: 2025-09-10 23:53:50.140 [INFO][4557] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:53:50.189657 containerd[1529]: 2025-09-10 23:53:50.142 [INFO][4557] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:50.189657 containerd[1529]: 2025-09-10 23:53:50.147 [INFO][4557] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:50.189657 containerd[1529]: 2025-09-10 23:53:50.147 [INFO][4557] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" host="localhost" Sep 10 23:53:50.189852 containerd[1529]: 2025-09-10 23:53:50.149 [INFO][4557] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b Sep 10 23:53:50.189852 containerd[1529]: 2025-09-10 23:53:50.154 [INFO][4557] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" host="localhost" Sep 10 23:53:50.189852 containerd[1529]: 2025-09-10 23:53:50.163 [INFO][4557] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" host="localhost" Sep 10 23:53:50.189852 containerd[1529]: 2025-09-10 23:53:50.163 [INFO][4557] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" host="localhost" Sep 10 23:53:50.189852 containerd[1529]: 2025-09-10 23:53:50.163 [INFO][4557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:53:50.189852 containerd[1529]: 2025-09-10 23:53:50.163 [INFO][4557] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" HandleID="k8s-pod-network.d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" Workload="localhost-k8s-coredns--668d6bf9bc--gcdfs-eth0" Sep 10 23:53:50.189963 containerd[1529]: 2025-09-10 23:53:50.167 [INFO][4530] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-gcdfs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gcdfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--gcdfs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a659cd15-573d-41fb-8f1f-438acd6270eb", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-gcdfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif06c78f95f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:50.190025 containerd[1529]: 2025-09-10 23:53:50.167 [INFO][4530] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-gcdfs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gcdfs-eth0" Sep 10 23:53:50.190025 containerd[1529]: 2025-09-10 23:53:50.167 [INFO][4530] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif06c78f95f2 ContainerID="d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-gcdfs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gcdfs-eth0" Sep 10 23:53:50.190025 containerd[1529]: 2025-09-10 23:53:50.171 [INFO][4530] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-gcdfs" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gcdfs-eth0" Sep 10 23:53:50.190097 containerd[1529]: 2025-09-10 23:53:50.172 [INFO][4530] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-gcdfs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gcdfs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--gcdfs-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a659cd15-573d-41fb-8f1f-438acd6270eb", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b", Pod:"coredns-668d6bf9bc-gcdfs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif06c78f95f2", MAC:"9e:34:46:31:55:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:50.190097 containerd[1529]: 2025-09-10 23:53:50.184 [INFO][4530] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" Namespace="kube-system" Pod="coredns-668d6bf9bc-gcdfs" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--gcdfs-eth0" Sep 10 23:53:50.220404 containerd[1529]: time="2025-09-10T23:53:50.220289982Z" level=info msg="connecting to shim d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b" address="unix:///run/containerd/s/5f3aca062aeca096b80bf61a8677addafcd83edb1620a74ca2836c3329f861b2" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:53:50.249549 systemd[1]: Started cri-containerd-d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b.scope - libcontainer container d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b. 
Sep 10 23:53:50.266110 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:53:50.306930 containerd[1529]: time="2025-09-10T23:53:50.303239603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-gcdfs,Uid:a659cd15-573d-41fb-8f1f-438acd6270eb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b\"" Sep 10 23:53:50.319396 kubelet[2651]: E0910 23:53:50.319360 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:50.333567 containerd[1529]: time="2025-09-10T23:53:50.333079975Z" level=info msg="CreateContainer within sandbox \"d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:53:50.601560 containerd[1529]: time="2025-09-10T23:53:50.601440120Z" level=info msg="Container 7780a6e929fd1e0565ecce4300f69a74cbc9c1b632cdc32af6f75e9f68841358: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:53:50.630681 containerd[1529]: time="2025-09-10T23:53:50.630622247Z" level=info msg="CreateContainer within sandbox \"d0e0f313f65033cb2fd0bd4625889371c160380334c8ad90950ea94cea8ffb2b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7780a6e929fd1e0565ecce4300f69a74cbc9c1b632cdc32af6f75e9f68841358\"" Sep 10 23:53:50.631432 containerd[1529]: time="2025-09-10T23:53:50.631403973Z" level=info msg="StartContainer for \"7780a6e929fd1e0565ecce4300f69a74cbc9c1b632cdc32af6f75e9f68841358\"" Sep 10 23:53:50.632211 containerd[1529]: time="2025-09-10T23:53:50.632189660Z" level=info msg="connecting to shim 7780a6e929fd1e0565ecce4300f69a74cbc9c1b632cdc32af6f75e9f68841358" address="unix:///run/containerd/s/5f3aca062aeca096b80bf61a8677addafcd83edb1620a74ca2836c3329f861b2" protocol=ttrpc version=3 
Sep 10 23:53:50.655295 systemd[1]: Started cri-containerd-7780a6e929fd1e0565ecce4300f69a74cbc9c1b632cdc32af6f75e9f68841358.scope - libcontainer container 7780a6e929fd1e0565ecce4300f69a74cbc9c1b632cdc32af6f75e9f68841358. Sep 10 23:53:50.685478 containerd[1529]: time="2025-09-10T23:53:50.685385909Z" level=info msg="StartContainer for \"7780a6e929fd1e0565ecce4300f69a74cbc9c1b632cdc32af6f75e9f68841358\" returns successfully" Sep 10 23:53:50.921805 systemd-networkd[1433]: cali2703ec7929c: Gained IPv6LL Sep 10 23:53:50.943343 containerd[1529]: time="2025-09-10T23:53:50.943301327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-wk7td,Uid:ddb49d47-44fb-41a9-b130-dc83abf59baa,Namespace:calico-system,Attempt:0,}" Sep 10 23:53:50.943754 kubelet[2651]: E0910 23:53:50.943699 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:50.944294 containerd[1529]: time="2025-09-10T23:53:50.944267095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89tdh,Uid:d4330244-faa7-4151-a455-862c6ed1a966,Namespace:kube-system,Attempt:0,}" Sep 10 23:53:51.090087 kubelet[2651]: E0910 23:53:51.090038 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:51.092330 systemd-networkd[1433]: cali6360f5d926c: Link UP Sep 10 23:53:51.093194 systemd-networkd[1433]: cali6360f5d926c: Gained carrier Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:50.981 [INFO][4740] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:50.999 [INFO][4740] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--wk7td-eth0 
goldmane-54d579b49d- calico-system ddb49d47-44fb-41a9-b130-dc83abf59baa 805 0 2025-09-10 23:53:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-wk7td eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali6360f5d926c [] [] }} ContainerID="c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" Namespace="calico-system" Pod="goldmane-54d579b49d-wk7td" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wk7td-" Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:50.999 [INFO][4740] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" Namespace="calico-system" Pod="goldmane-54d579b49d-wk7td" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wk7td-eth0" Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.041 [INFO][4771] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" HandleID="k8s-pod-network.c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" Workload="localhost-k8s-goldmane--54d579b49d--wk7td-eth0" Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.041 [INFO][4771] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" HandleID="k8s-pod-network.c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" Workload="localhost-k8s-goldmane--54d579b49d--wk7td-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd600), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-wk7td", "timestamp":"2025-09-10 23:53:51.041378866 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.041 [INFO][4771] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.041 [INFO][4771] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.041 [INFO][4771] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.053 [INFO][4771] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" host="localhost" Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.059 [INFO][4771] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.063 [INFO][4771] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.065 [INFO][4771] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.067 [INFO][4771] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.067 [INFO][4771] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" host="localhost" Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.069 [INFO][4771] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2 Sep 10 
23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.073 [INFO][4771] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" host="localhost" Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.081 [INFO][4771] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" host="localhost" Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.081 [INFO][4771] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" host="localhost" Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.081 [INFO][4771] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:53:51.111947 containerd[1529]: 2025-09-10 23:53:51.081 [INFO][4771] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" HandleID="k8s-pod-network.c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" Workload="localhost-k8s-goldmane--54d579b49d--wk7td-eth0" Sep 10 23:53:51.114242 containerd[1529]: 2025-09-10 23:53:51.084 [INFO][4740] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" Namespace="calico-system" Pod="goldmane-54d579b49d-wk7td" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wk7td-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--wk7td-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ddb49d47-44fb-41a9-b130-dc83abf59baa", ResourceVersion:"805", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-wk7td", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6360f5d926c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:51.114242 containerd[1529]: 2025-09-10 23:53:51.084 [INFO][4740] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" Namespace="calico-system" Pod="goldmane-54d579b49d-wk7td" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wk7td-eth0" Sep 10 23:53:51.114242 containerd[1529]: 2025-09-10 23:53:51.084 [INFO][4740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6360f5d926c ContainerID="c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" Namespace="calico-system" Pod="goldmane-54d579b49d-wk7td" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wk7td-eth0" Sep 10 23:53:51.114242 containerd[1529]: 2025-09-10 23:53:51.092 [INFO][4740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" 
Namespace="calico-system" Pod="goldmane-54d579b49d-wk7td" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wk7td-eth0" Sep 10 23:53:51.114242 containerd[1529]: 2025-09-10 23:53:51.093 [INFO][4740] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" Namespace="calico-system" Pod="goldmane-54d579b49d-wk7td" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wk7td-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--wk7td-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ddb49d47-44fb-41a9-b130-dc83abf59baa", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2", Pod:"goldmane-54d579b49d-wk7td", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali6360f5d926c", MAC:"66:08:ef:a2:6f:8f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:51.114242 
containerd[1529]: 2025-09-10 23:53:51.105 [INFO][4740] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" Namespace="calico-system" Pod="goldmane-54d579b49d-wk7td" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--wk7td-eth0" Sep 10 23:53:51.145714 kubelet[2651]: I0910 23:53:51.145653 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-gcdfs" podStartSLOduration=37.145632882 podStartE2EDuration="37.145632882s" podCreationTimestamp="2025-09-10 23:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:53:51.119331666 +0000 UTC m=+43.261308908" watchObservedRunningTime="2025-09-10 23:53:51.145632882 +0000 UTC m=+43.287610124" Sep 10 23:53:51.200812 systemd[1]: Started sshd@7-10.0.0.82:22-10.0.0.1:50562.service - OpenSSH per-connection server daemon (10.0.0.1:50562). 
Sep 10 23:53:51.253544 systemd-networkd[1433]: cali0a562865353: Link UP Sep 10 23:53:51.255514 containerd[1529]: time="2025-09-10T23:53:51.253827011Z" level=info msg="connecting to shim c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2" address="unix:///run/containerd/s/b82ef1b37d128679c22ac274847927279355b8bafbc8bfcc45ce5c187b8316cd" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:53:51.254787 systemd-networkd[1433]: cali0a562865353: Gained carrier Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.003 [INFO][4746] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.024 [INFO][4746] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--89tdh-eth0 coredns-668d6bf9bc- kube-system d4330244-faa7-4151-a455-862c6ed1a966 798 0 2025-09-10 23:53:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-89tdh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0a562865353 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" Namespace="kube-system" Pod="coredns-668d6bf9bc-89tdh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89tdh-" Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.024 [INFO][4746] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" Namespace="kube-system" Pod="coredns-668d6bf9bc-89tdh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89tdh-eth0" Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.063 [INFO][4778] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" HandleID="k8s-pod-network.d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" Workload="localhost-k8s-coredns--668d6bf9bc--89tdh-eth0" Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.063 [INFO][4778] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" HandleID="k8s-pod-network.d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" Workload="localhost-k8s-coredns--668d6bf9bc--89tdh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136460), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-89tdh", "timestamp":"2025-09-10 23:53:51.06265372 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.064 [INFO][4778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.081 [INFO][4778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.082 [INFO][4778] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.152 [INFO][4778] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" host="localhost" Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.203 [INFO][4778] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.211 [INFO][4778] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.215 [INFO][4778] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.219 [INFO][4778] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.219 [INFO][4778] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" host="localhost" Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.222 [INFO][4778] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26 Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.228 [INFO][4778] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" host="localhost" Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.239 [INFO][4778] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" host="localhost" Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.239 [INFO][4778] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" host="localhost" Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.239 [INFO][4778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:53:51.286199 containerd[1529]: 2025-09-10 23:53:51.239 [INFO][4778] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" HandleID="k8s-pod-network.d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" Workload="localhost-k8s-coredns--668d6bf9bc--89tdh-eth0" Sep 10 23:53:51.286819 containerd[1529]: 2025-09-10 23:53:51.249 [INFO][4746] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" Namespace="kube-system" Pod="coredns-668d6bf9bc-89tdh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89tdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--89tdh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d4330244-faa7-4151-a455-862c6ed1a966", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-89tdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0a562865353", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:51.286819 containerd[1529]: 2025-09-10 23:53:51.249 [INFO][4746] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" Namespace="kube-system" Pod="coredns-668d6bf9bc-89tdh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89tdh-eth0" Sep 10 23:53:51.286819 containerd[1529]: 2025-09-10 23:53:51.249 [INFO][4746] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0a562865353 ContainerID="d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" Namespace="kube-system" Pod="coredns-668d6bf9bc-89tdh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89tdh-eth0" Sep 10 23:53:51.286819 containerd[1529]: 2025-09-10 23:53:51.251 [INFO][4746] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" Namespace="kube-system" Pod="coredns-668d6bf9bc-89tdh" 
WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89tdh-eth0" Sep 10 23:53:51.286819 containerd[1529]: 2025-09-10 23:53:51.254 [INFO][4746] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" Namespace="kube-system" Pod="coredns-668d6bf9bc-89tdh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89tdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--89tdh-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d4330244-faa7-4151-a455-862c6ed1a966", ResourceVersion:"798", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 53, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26", Pod:"coredns-668d6bf9bc-89tdh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0a562865353", MAC:"2e:5f:99:a9:e2:cc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:53:51.286819 containerd[1529]: 2025-09-10 23:53:51.271 [INFO][4746] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" Namespace="kube-system" Pod="coredns-668d6bf9bc-89tdh" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--89tdh-eth0" Sep 10 23:53:51.293960 sshd[4798]: Accepted publickey for core from 10.0.0.1 port 50562 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE Sep 10 23:53:51.293377 systemd[1]: Started cri-containerd-c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2.scope - libcontainer container c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2. Sep 10 23:53:51.296095 sshd-session[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:53:51.306995 systemd-logind[1514]: New session 8 of user core. Sep 10 23:53:51.310418 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 23:53:51.316516 containerd[1529]: time="2025-09-10T23:53:51.314776792Z" level=info msg="connecting to shim d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26" address="unix:///run/containerd/s/fb0777b83c8b1436c878481b0484425fd6bce16bae6502b9315b969dafd4ece1" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:53:51.343325 systemd[1]: Started cri-containerd-d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26.scope - libcontainer container d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26. 
Sep 10 23:53:51.354553 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:53:51.364656 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:53:51.432593 containerd[1529]: time="2025-09-10T23:53:51.432544279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-89tdh,Uid:d4330244-faa7-4151-a455-862c6ed1a966,Namespace:kube-system,Attempt:0,} returns sandbox id \"d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26\"" Sep 10 23:53:51.436352 containerd[1529]: time="2025-09-10T23:53:51.435900787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-wk7td,Uid:ddb49d47-44fb-41a9-b130-dc83abf59baa,Namespace:calico-system,Attempt:0,} returns sandbox id \"c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2\"" Sep 10 23:53:51.436706 kubelet[2651]: E0910 23:53:51.436340 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:51.439423 containerd[1529]: time="2025-09-10T23:53:51.439355935Z" level=info msg="CreateContainer within sandbox \"d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:53:51.456301 containerd[1529]: time="2025-09-10T23:53:51.456218433Z" level=info msg="Container a0c5104a120c808aba55a06a99ce2c272de48c7479564a28753c3b35487800a5: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:53:51.465555 containerd[1529]: time="2025-09-10T23:53:51.465514870Z" level=info msg="CreateContainer within sandbox \"d2099a824f03e354b89425d279c63408d4aaeab2eabff020eb7cb920a322ca26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0c5104a120c808aba55a06a99ce2c272de48c7479564a28753c3b35487800a5\"" Sep 10 
23:53:51.466772 containerd[1529]: time="2025-09-10T23:53:51.466737600Z" level=info msg="StartContainer for \"a0c5104a120c808aba55a06a99ce2c272de48c7479564a28753c3b35487800a5\"" Sep 10 23:53:51.469338 containerd[1529]: time="2025-09-10T23:53:51.469306061Z" level=info msg="connecting to shim a0c5104a120c808aba55a06a99ce2c272de48c7479564a28753c3b35487800a5" address="unix:///run/containerd/s/fb0777b83c8b1436c878481b0484425fd6bce16bae6502b9315b969dafd4ece1" protocol=ttrpc version=3 Sep 10 23:53:51.490280 systemd[1]: Started cri-containerd-a0c5104a120c808aba55a06a99ce2c272de48c7479564a28753c3b35487800a5.scope - libcontainer container a0c5104a120c808aba55a06a99ce2c272de48c7479564a28753c3b35487800a5. Sep 10 23:53:51.499133 systemd-networkd[1433]: calif06c78f95f2: Gained IPv6LL Sep 10 23:53:51.501891 containerd[1529]: time="2025-09-10T23:53:51.501844328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:51.502665 containerd[1529]: time="2025-09-10T23:53:51.502616255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 10 23:53:51.505367 containerd[1529]: time="2025-09-10T23:53:51.504463750Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:51.507004 containerd[1529]: time="2025-09-10T23:53:51.506963330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:51.507570 containerd[1529]: time="2025-09-10T23:53:51.507398934Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id 
\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.249873697s" Sep 10 23:53:51.507570 containerd[1529]: time="2025-09-10T23:53:51.507429734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 10 23:53:51.510726 containerd[1529]: time="2025-09-10T23:53:51.509954795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 23:53:51.518007 containerd[1529]: time="2025-09-10T23:53:51.517977821Z" level=info msg="CreateContainer within sandbox \"de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 23:53:51.528342 containerd[1529]: time="2025-09-10T23:53:51.528232785Z" level=info msg="Container 6ee32e240b64ae9bcbade6932460258ee91aa8583a232bfb4a069867096f6404: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:53:51.531269 containerd[1529]: time="2025-09-10T23:53:51.531237010Z" level=info msg="StartContainer for \"a0c5104a120c808aba55a06a99ce2c272de48c7479564a28753c3b35487800a5\" returns successfully" Sep 10 23:53:51.535366 containerd[1529]: time="2025-09-10T23:53:51.535122242Z" level=info msg="CreateContainer within sandbox \"de159ebac19cd29b16402b6aaaf6c2476434aa2e52c33d73384600c5e2589b73\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6ee32e240b64ae9bcbade6932460258ee91aa8583a232bfb4a069867096f6404\"" Sep 10 23:53:51.538334 containerd[1529]: time="2025-09-10T23:53:51.537286619Z" level=info msg="StartContainer for \"6ee32e240b64ae9bcbade6932460258ee91aa8583a232bfb4a069867096f6404\"" Sep 10 23:53:51.538883 containerd[1529]: 
time="2025-09-10T23:53:51.538616950Z" level=info msg="connecting to shim 6ee32e240b64ae9bcbade6932460258ee91aa8583a232bfb4a069867096f6404" address="unix:///run/containerd/s/c25798f21264c95fe35db65dce1f714aa2dd896e545e3db5a184614d039b9451" protocol=ttrpc version=3 Sep 10 23:53:51.565300 systemd[1]: Started cri-containerd-6ee32e240b64ae9bcbade6932460258ee91aa8583a232bfb4a069867096f6404.scope - libcontainer container 6ee32e240b64ae9bcbade6932460258ee91aa8583a232bfb4a069867096f6404. Sep 10 23:53:51.613706 containerd[1529]: time="2025-09-10T23:53:51.613670647Z" level=info msg="StartContainer for \"6ee32e240b64ae9bcbade6932460258ee91aa8583a232bfb4a069867096f6404\" returns successfully" Sep 10 23:53:51.635658 sshd[4871]: Connection closed by 10.0.0.1 port 50562 Sep 10 23:53:51.636020 sshd-session[4798]: pam_unix(sshd:session): session closed for user core Sep 10 23:53:51.639350 systemd-logind[1514]: Session 8 logged out. Waiting for processes to exit. Sep 10 23:53:51.639584 systemd[1]: sshd@7-10.0.0.82:22-10.0.0.1:50562.service: Deactivated successfully. Sep 10 23:53:51.641454 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 23:53:51.643151 systemd-logind[1514]: Removed session 8. 
Sep 10 23:53:51.753291 systemd-networkd[1433]: cali3ef5bdd9551: Gained IPv6LL Sep 10 23:53:52.106172 kubelet[2651]: E0910 23:53:52.106127 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:52.109955 kubelet[2651]: E0910 23:53:52.109925 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:52.116105 kubelet[2651]: I0910 23:53:52.116039 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-777d545795-gnhhs" podStartSLOduration=20.963330338 podStartE2EDuration="24.116017188s" podCreationTimestamp="2025-09-10 23:53:28 +0000 UTC" firstStartedPulling="2025-09-10 23:53:48.356330017 +0000 UTC m=+40.498307259" lastFinishedPulling="2025-09-10 23:53:51.509016867 +0000 UTC m=+43.650994109" observedRunningTime="2025-09-10 23:53:52.114327894 +0000 UTC m=+44.256305136" watchObservedRunningTime="2025-09-10 23:53:52.116017188 +0000 UTC m=+44.257994430" Sep 10 23:53:52.649265 systemd-networkd[1433]: cali0a562865353: Gained IPv6LL Sep 10 23:53:52.649530 systemd-networkd[1433]: cali6360f5d926c: Gained IPv6LL Sep 10 23:53:53.112010 kubelet[2651]: I0910 23:53:53.111985 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:53:53.112595 kubelet[2651]: E0910 23:53:53.112289 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:53.112595 kubelet[2651]: E0910 23:53:53.112511 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:53.194488 containerd[1529]: 
time="2025-09-10T23:53:53.194444645Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:53.194927 containerd[1529]: time="2025-09-10T23:53:53.194901329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 10 23:53:53.195839 containerd[1529]: time="2025-09-10T23:53:53.195787975Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:53.197631 containerd[1529]: time="2025-09-10T23:53:53.197581549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:53.198112 containerd[1529]: time="2025-09-10T23:53:53.198069033Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.687336752s" Sep 10 23:53:53.198188 containerd[1529]: time="2025-09-10T23:53:53.198111714Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 10 23:53:53.199305 containerd[1529]: time="2025-09-10T23:53:53.199278483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 10 23:53:53.202406 containerd[1529]: time="2025-09-10T23:53:53.202367547Z" level=info msg="CreateContainer within sandbox \"6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 23:53:53.209977 containerd[1529]: time="2025-09-10T23:53:53.209269560Z" level=info msg="Container 8d724d4460a1ebd87d75156819a03790df1f834032eaf5a5ac25bf1de2be8a4a: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:53:53.215344 containerd[1529]: time="2025-09-10T23:53:53.215303127Z" level=info msg="CreateContainer within sandbox \"6848d4b822d7a6eb307baabbfacdb6f0777bb87725d118b13dd5d62c7cebd3a6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8d724d4460a1ebd87d75156819a03790df1f834032eaf5a5ac25bf1de2be8a4a\"" Sep 10 23:53:53.216117 containerd[1529]: time="2025-09-10T23:53:53.215973492Z" level=info msg="StartContainer for \"8d724d4460a1ebd87d75156819a03790df1f834032eaf5a5ac25bf1de2be8a4a\"" Sep 10 23:53:53.217118 containerd[1529]: time="2025-09-10T23:53:53.217092541Z" level=info msg="connecting to shim 8d724d4460a1ebd87d75156819a03790df1f834032eaf5a5ac25bf1de2be8a4a" address="unix:///run/containerd/s/c78f1c776159fbb7ecc9d2182512c5149219fd52c7161debe38156e23132f74c" protocol=ttrpc version=3 Sep 10 23:53:53.237300 systemd[1]: Started cri-containerd-8d724d4460a1ebd87d75156819a03790df1f834032eaf5a5ac25bf1de2be8a4a.scope - libcontainer container 8d724d4460a1ebd87d75156819a03790df1f834032eaf5a5ac25bf1de2be8a4a. 
Sep 10 23:53:53.275091 containerd[1529]: time="2025-09-10T23:53:53.275047752Z" level=info msg="StartContainer for \"8d724d4460a1ebd87d75156819a03790df1f834032eaf5a5ac25bf1de2be8a4a\" returns successfully" Sep 10 23:53:53.940117 kubelet[2651]: I0910 23:53:53.940072 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:53:53.940851 kubelet[2651]: E0910 23:53:53.940508 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:53.963335 kubelet[2651]: I0910 23:53:53.962994 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-89tdh" podStartSLOduration=39.962976582 podStartE2EDuration="39.962976582s" podCreationTimestamp="2025-09-10 23:53:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:53:52.12638243 +0000 UTC m=+44.268359832" watchObservedRunningTime="2025-09-10 23:53:53.962976582 +0000 UTC m=+46.104953824" Sep 10 23:53:54.121119 kubelet[2651]: E0910 23:53:54.121069 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 10 23:53:54.662165 containerd[1529]: time="2025-09-10T23:53:54.662081122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:54.662993 containerd[1529]: time="2025-09-10T23:53:54.662926928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 10 23:53:54.664636 containerd[1529]: time="2025-09-10T23:53:54.664595421Z" level=info msg="ImageCreate event 
name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:54.667043 containerd[1529]: time="2025-09-10T23:53:54.667010759Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:53:54.668054 containerd[1529]: time="2025-09-10T23:53:54.668009006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.468699883s" Sep 10 23:53:54.668054 containerd[1529]: time="2025-09-10T23:53:54.668045807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 10 23:53:54.670479 containerd[1529]: time="2025-09-10T23:53:54.670444305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 23:53:54.672780 containerd[1529]: time="2025-09-10T23:53:54.672745842Z" level=info msg="CreateContainer within sandbox \"765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 10 23:53:54.682168 containerd[1529]: time="2025-09-10T23:53:54.680163498Z" level=info msg="Container a377660c16ff186710ec73dedfd125c06ae4fa24097820be78b00ae093b11def: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:53:54.690495 containerd[1529]: time="2025-09-10T23:53:54.690393216Z" level=info msg="CreateContainer within sandbox 
\"765c51d36dfcc81fc403d9935f8fd52c4c17e0ea032c14dfe55b34d96229db92\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a377660c16ff186710ec73dedfd125c06ae4fa24097820be78b00ae093b11def\""
Sep 10 23:53:54.691615 containerd[1529]: time="2025-09-10T23:53:54.691588065Z" level=info msg="StartContainer for \"a377660c16ff186710ec73dedfd125c06ae4fa24097820be78b00ae093b11def\""
Sep 10 23:53:54.696379 containerd[1529]: time="2025-09-10T23:53:54.696345821Z" level=info msg="connecting to shim a377660c16ff186710ec73dedfd125c06ae4fa24097820be78b00ae093b11def" address="unix:///run/containerd/s/e9a5cc45c60574922e1b688a7d8f3f4e6a65a5e77783ea8990851372c8d556cc" protocol=ttrpc version=3
Sep 10 23:53:54.723318 systemd[1]: Started cri-containerd-a377660c16ff186710ec73dedfd125c06ae4fa24097820be78b00ae093b11def.scope - libcontainer container a377660c16ff186710ec73dedfd125c06ae4fa24097820be78b00ae093b11def.
Sep 10 23:53:54.735493 kubelet[2651]: I0910 23:53:54.735460 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:53:54.777716 containerd[1529]: time="2025-09-10T23:53:54.777681957Z" level=info msg="StartContainer for \"a377660c16ff186710ec73dedfd125c06ae4fa24097820be78b00ae093b11def\" returns successfully"
Sep 10 23:53:54.882991 containerd[1529]: time="2025-09-10T23:53:54.882292588Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee8825b122730548f41542673069aaaef97a60d3edd9a0fd86923f4cad8beb4d\" id:\"8c63816724960c0c1d8438704bcbf24a251cab214d8eb1127208db0b84b3f823\" pid:5214 exit_status:1 exited_at:{seconds:1757548434 nanos:877173230}"
Sep 10 23:53:54.968413 containerd[1529]: time="2025-09-10T23:53:54.968290359Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee8825b122730548f41542673069aaaef97a60d3edd9a0fd86923f4cad8beb4d\" id:\"8ee947460edd31cf9233acebc3fe9b2c0b98539743f0a76499b1cc66290c6c96\" pid:5239 exit_status:1 exited_at:{seconds:1757548434 nanos:967901996}"
Sep 10 23:53:55.017054 kubelet[2651]: I0910 23:53:55.017011 2651 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 10 23:53:55.021885 kubelet[2651]: I0910 23:53:55.021845 2651 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 10 23:53:55.120857 containerd[1529]: time="2025-09-10T23:53:55.120809849Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:53:55.121427 containerd[1529]: time="2025-09-10T23:53:55.121319813Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 10 23:53:55.123075 containerd[1529]: time="2025-09-10T23:53:55.123022226Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 452.542961ms"
Sep 10 23:53:55.123075 containerd[1529]: time="2025-09-10T23:53:55.123056626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 10 23:53:55.124410 containerd[1529]: time="2025-09-10T23:53:55.124384156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 10 23:53:55.126345 containerd[1529]: time="2025-09-10T23:53:55.125476444Z" level=info msg="CreateContainer within sandbox \"58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 10 23:53:55.129156 kubelet[2651]: I0910 23:53:55.128825 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:53:55.140060 kubelet[2651]: I0910 23:53:55.140012 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-xvkpt" podStartSLOduration=20.691381462 podStartE2EDuration="27.139995751s" podCreationTimestamp="2025-09-10 23:53:28 +0000 UTC" firstStartedPulling="2025-09-10 23:53:48.220898249 +0000 UTC m=+40.362875491" lastFinishedPulling="2025-09-10 23:53:54.669512538 +0000 UTC m=+46.811489780" observedRunningTime="2025-09-10 23:53:55.139536467 +0000 UTC m=+47.281513709" watchObservedRunningTime="2025-09-10 23:53:55.139995751 +0000 UTC m=+47.281972993"
Sep 10 23:53:55.140355 kubelet[2651]: I0910 23:53:55.140320 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78b859ff9c-r7vkr" podStartSLOduration=27.164772734 podStartE2EDuration="31.140314753s" podCreationTimestamp="2025-09-10 23:53:24 +0000 UTC" firstStartedPulling="2025-09-10 23:53:49.223507102 +0000 UTC m=+41.365484304" lastFinishedPulling="2025-09-10 23:53:53.199049081 +0000 UTC m=+45.341026323" observedRunningTime="2025-09-10 23:53:54.134185606 +0000 UTC m=+46.276162888" watchObservedRunningTime="2025-09-10 23:53:55.140314753 +0000 UTC m=+47.282291995"
Sep 10 23:53:55.154310 containerd[1529]: time="2025-09-10T23:53:55.154254136Z" level=info msg="Container cc4eb91280533dbca30e1a9b82e2bda12334be43c276bfc1207b1ed0bb1af6d1: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:53:55.163592 containerd[1529]: time="2025-09-10T23:53:55.163552684Z" level=info msg="CreateContainer within sandbox \"58243a130325b1f48016484a966944a0e0c6c3a8f92843d34fbf410dde19bf8c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"cc4eb91280533dbca30e1a9b82e2bda12334be43c276bfc1207b1ed0bb1af6d1\""
Sep 10 23:53:55.164257 containerd[1529]: time="2025-09-10T23:53:55.164215329Z" level=info msg="StartContainer for \"cc4eb91280533dbca30e1a9b82e2bda12334be43c276bfc1207b1ed0bb1af6d1\""
Sep 10 23:53:55.165544 containerd[1529]: time="2025-09-10T23:53:55.165511859Z" level=info msg="connecting to shim cc4eb91280533dbca30e1a9b82e2bda12334be43c276bfc1207b1ed0bb1af6d1" address="unix:///run/containerd/s/ec2e15b5e846719285a1a4887a6d82201a4df8bc829c49772bae4b31522e1d13" protocol=ttrpc version=3
Sep 10 23:53:55.188328 systemd[1]: Started cri-containerd-cc4eb91280533dbca30e1a9b82e2bda12334be43c276bfc1207b1ed0bb1af6d1.scope - libcontainer container cc4eb91280533dbca30e1a9b82e2bda12334be43c276bfc1207b1ed0bb1af6d1.
Sep 10 23:53:55.263659 systemd-networkd[1433]: vxlan.calico: Link UP
Sep 10 23:53:55.263666 systemd-networkd[1433]: vxlan.calico: Gained carrier
Sep 10 23:53:55.282422 containerd[1529]: time="2025-09-10T23:53:55.282386720Z" level=info msg="StartContainer for \"cc4eb91280533dbca30e1a9b82e2bda12334be43c276bfc1207b1ed0bb1af6d1\" returns successfully"
Sep 10 23:53:56.145901 kubelet[2651]: I0910 23:53:56.145585 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-78b859ff9c-ks5wz" podStartSLOduration=27.21216275 podStartE2EDuration="32.145567651s" podCreationTimestamp="2025-09-10 23:53:24 +0000 UTC" firstStartedPulling="2025-09-10 23:53:50.19037293 +0000 UTC m=+42.332350172" lastFinishedPulling="2025-09-10 23:53:55.123777831 +0000 UTC m=+47.265755073" observedRunningTime="2025-09-10 23:53:56.145221848 +0000 UTC m=+48.287199090" watchObservedRunningTime="2025-09-10 23:53:56.145567651 +0000 UTC m=+48.287544893"
Sep 10 23:53:56.653347 systemd[1]: Started sshd@8-10.0.0.82:22-10.0.0.1:50564.service - OpenSSH per-connection server daemon (10.0.0.1:50564).
Sep 10 23:53:56.733169 sshd[5399]: Accepted publickey for core from 10.0.0.1 port 50564 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:53:56.735994 sshd-session[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:53:56.742896 systemd-logind[1514]: New session 9 of user core.
Sep 10 23:53:56.746284 systemd-networkd[1433]: vxlan.calico: Gained IPv6LL
Sep 10 23:53:56.749366 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 10 23:53:56.876925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2238695627.mount: Deactivated successfully.
Sep 10 23:53:56.932474 sshd[5401]: Connection closed by 10.0.0.1 port 50564
Sep 10 23:53:56.933340 sshd-session[5399]: pam_unix(sshd:session): session closed for user core
Sep 10 23:53:56.937547 systemd[1]: sshd@8-10.0.0.82:22-10.0.0.1:50564.service: Deactivated successfully.
Sep 10 23:53:56.942963 systemd[1]: session-9.scope: Deactivated successfully.
Sep 10 23:53:56.945433 systemd-logind[1514]: Session 9 logged out. Waiting for processes to exit.
Sep 10 23:53:56.946999 systemd-logind[1514]: Removed session 9.
Sep 10 23:53:57.135385 kubelet[2651]: I0910 23:53:57.135348 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:53:57.277734 containerd[1529]: time="2025-09-10T23:53:57.276777351Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:53:57.277734 containerd[1529]: time="2025-09-10T23:53:57.277691837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 10 23:53:57.278647 containerd[1529]: time="2025-09-10T23:53:57.278622204Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:53:57.281184 containerd[1529]: time="2025-09-10T23:53:57.281130701Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 10 23:53:57.281762 containerd[1529]: time="2025-09-10T23:53:57.281729266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.157319109s"
Sep 10 23:53:57.281827 containerd[1529]: time="2025-09-10T23:53:57.281765346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 10 23:53:57.298845 containerd[1529]: time="2025-09-10T23:53:57.298782745Z" level=info msg="CreateContainer within sandbox \"c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 10 23:53:57.307329 containerd[1529]: time="2025-09-10T23:53:57.307290884Z" level=info msg="Container ebf339801332e088505ed7e770bf2d94b270817e31a93b0e7cded6933d1b167a: CDI devices from CRI Config.CDIDevices: []"
Sep 10 23:53:57.311267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3476332454.mount: Deactivated successfully.
Sep 10 23:53:57.318018 containerd[1529]: time="2025-09-10T23:53:57.317980959Z" level=info msg="CreateContainer within sandbox \"c7291a55979e9a4b8583be709d8992f53df23d0caca9b4adf3753758367282f2\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"ebf339801332e088505ed7e770bf2d94b270817e31a93b0e7cded6933d1b167a\""
Sep 10 23:53:57.321937 containerd[1529]: time="2025-09-10T23:53:57.320266695Z" level=info msg="StartContainer for \"ebf339801332e088505ed7e770bf2d94b270817e31a93b0e7cded6933d1b167a\""
Sep 10 23:53:57.322207 containerd[1529]: time="2025-09-10T23:53:57.322178308Z" level=info msg="connecting to shim ebf339801332e088505ed7e770bf2d94b270817e31a93b0e7cded6933d1b167a" address="unix:///run/containerd/s/b82ef1b37d128679c22ac274847927279355b8bafbc8bfcc45ce5c187b8316cd" protocol=ttrpc version=3
Sep 10 23:53:57.353408 systemd[1]: Started cri-containerd-ebf339801332e088505ed7e770bf2d94b270817e31a93b0e7cded6933d1b167a.scope - libcontainer container ebf339801332e088505ed7e770bf2d94b270817e31a93b0e7cded6933d1b167a.
Sep 10 23:53:57.399157 containerd[1529]: time="2025-09-10T23:53:57.399098285Z" level=info msg="StartContainer for \"ebf339801332e088505ed7e770bf2d94b270817e31a93b0e7cded6933d1b167a\" returns successfully"
Sep 10 23:53:58.179396 kubelet[2651]: I0910 23:53:58.179077 2651 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-wk7td" podStartSLOduration=25.333535511 podStartE2EDuration="31.179057618s" podCreationTimestamp="2025-09-10 23:53:27 +0000 UTC" firstStartedPulling="2025-09-10 23:53:51.444388856 +0000 UTC m=+43.586366098" lastFinishedPulling="2025-09-10 23:53:57.289910963 +0000 UTC m=+49.431888205" observedRunningTime="2025-09-10 23:53:58.175597994 +0000 UTC m=+50.317575236" watchObservedRunningTime="2025-09-10 23:53:58.179057618 +0000 UTC m=+50.321034860"
Sep 10 23:53:58.290288 containerd[1529]: time="2025-09-10T23:53:58.290224494Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebf339801332e088505ed7e770bf2d94b270817e31a93b0e7cded6933d1b167a\" id:\"312ed962b21ae2d104757d7b4c3a53a7d8a5ab9f756845a243cc6225fcc57b75\" pid:5469 exit_status:1 exited_at:{seconds:1757548438 nanos:279452141}"
Sep 10 23:53:59.244960 containerd[1529]: time="2025-09-10T23:53:59.244734460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebf339801332e088505ed7e770bf2d94b270817e31a93b0e7cded6933d1b167a\" id:\"ee069e175d44d4f65898e8d9db356baa5cbf90bbe07d1e8e796e00bd56a30715\" pid:5496 exit_status:1 exited_at:{seconds:1757548439 nanos:244468058}"
Sep 10 23:54:01.951730 systemd[1]: Started sshd@9-10.0.0.82:22-10.0.0.1:58772.service - OpenSSH per-connection server daemon (10.0.0.1:58772).
Sep 10 23:54:02.015047 sshd[5516]: Accepted publickey for core from 10.0.0.1 port 58772 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:54:02.016437 sshd-session[5516]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:54:02.020531 systemd-logind[1514]: New session 10 of user core.
Sep 10 23:54:02.033314 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 10 23:54:02.259045 sshd[5518]: Connection closed by 10.0.0.1 port 58772
Sep 10 23:54:02.259348 sshd-session[5516]: pam_unix(sshd:session): session closed for user core
Sep 10 23:54:02.271368 systemd[1]: sshd@9-10.0.0.82:22-10.0.0.1:58772.service: Deactivated successfully.
Sep 10 23:54:02.272895 systemd[1]: session-10.scope: Deactivated successfully.
Sep 10 23:54:02.273618 systemd-logind[1514]: Session 10 logged out. Waiting for processes to exit.
Sep 10 23:54:02.276008 systemd[1]: Started sshd@10-10.0.0.82:22-10.0.0.1:58778.service - OpenSSH per-connection server daemon (10.0.0.1:58778).
Sep 10 23:54:02.277501 systemd-logind[1514]: Removed session 10.
Sep 10 23:54:02.332464 sshd[5537]: Accepted publickey for core from 10.0.0.1 port 58778 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:54:02.333654 sshd-session[5537]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:54:02.337787 systemd-logind[1514]: New session 11 of user core.
Sep 10 23:54:02.346278 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 10 23:54:02.520035 sshd[5539]: Connection closed by 10.0.0.1 port 58778
Sep 10 23:54:02.520393 sshd-session[5537]: pam_unix(sshd:session): session closed for user core
Sep 10 23:54:02.531751 systemd[1]: sshd@10-10.0.0.82:22-10.0.0.1:58778.service: Deactivated successfully.
Sep 10 23:54:02.535691 systemd[1]: session-11.scope: Deactivated successfully.
Sep 10 23:54:02.538131 systemd-logind[1514]: Session 11 logged out. Waiting for processes to exit.
Sep 10 23:54:02.546819 systemd[1]: Started sshd@11-10.0.0.82:22-10.0.0.1:58792.service - OpenSSH per-connection server daemon (10.0.0.1:58792).
Sep 10 23:54:02.547690 systemd-logind[1514]: Removed session 11.
Sep 10 23:54:02.606240 sshd[5550]: Accepted publickey for core from 10.0.0.1 port 58792 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:54:02.607536 sshd-session[5550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:54:02.612283 systemd-logind[1514]: New session 12 of user core.
Sep 10 23:54:02.619300 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 10 23:54:02.784506 sshd[5552]: Connection closed by 10.0.0.1 port 58792
Sep 10 23:54:02.784782 sshd-session[5550]: pam_unix(sshd:session): session closed for user core
Sep 10 23:54:02.789763 systemd[1]: sshd@11-10.0.0.82:22-10.0.0.1:58792.service: Deactivated successfully.
Sep 10 23:54:02.792555 systemd[1]: session-12.scope: Deactivated successfully.
Sep 10 23:54:02.795398 systemd-logind[1514]: Session 12 logged out. Waiting for processes to exit.
Sep 10 23:54:02.797593 systemd-logind[1514]: Removed session 12.
Sep 10 23:54:03.246991 kubelet[2651]: I0910 23:54:03.246929 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:54:03.286691 containerd[1529]: time="2025-09-10T23:54:03.286548460Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ee32e240b64ae9bcbade6932460258ee91aa8583a232bfb4a069867096f6404\" id:\"d2a6b32c28053325efe5ea1914232153a3f2426c4f274d4a69568893be57643f\" pid:5578 exited_at:{seconds:1757548443 nanos:286324018}"
Sep 10 23:54:03.324273 containerd[1529]: time="2025-09-10T23:54:03.324228604Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ee32e240b64ae9bcbade6932460258ee91aa8583a232bfb4a069867096f6404\" id:\"4296ef6a28b7f4b81662ecdc66136f3d199b1ae503c779df36c725da3b704177\" pid:5601 exited_at:{seconds:1757548443 nanos:323976003}"
Sep 10 23:54:03.518502 kubelet[2651]: I0910 23:54:03.518393 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:54:07.797389 systemd[1]: Started sshd@12-10.0.0.82:22-10.0.0.1:58804.service - OpenSSH per-connection server daemon (10.0.0.1:58804).
Sep 10 23:54:07.869738 sshd[5622]: Accepted publickey for core from 10.0.0.1 port 58804 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:54:07.871156 sshd-session[5622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:54:07.875199 systemd-logind[1514]: New session 13 of user core.
Sep 10 23:54:07.889283 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 10 23:54:08.049855 sshd[5624]: Connection closed by 10.0.0.1 port 58804
Sep 10 23:54:08.050167 sshd-session[5622]: pam_unix(sshd:session): session closed for user core
Sep 10 23:54:08.058390 systemd[1]: sshd@12-10.0.0.82:22-10.0.0.1:58804.service: Deactivated successfully.
Sep 10 23:54:08.060442 systemd[1]: session-13.scope: Deactivated successfully.
Sep 10 23:54:08.061237 systemd-logind[1514]: Session 13 logged out. Waiting for processes to exit.
Sep 10 23:54:08.064561 systemd[1]: Started sshd@13-10.0.0.82:22-10.0.0.1:58814.service - OpenSSH per-connection server daemon (10.0.0.1:58814).
Sep 10 23:54:08.065046 systemd-logind[1514]: Removed session 13.
Sep 10 23:54:08.120858 sshd[5641]: Accepted publickey for core from 10.0.0.1 port 58814 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:54:08.122200 sshd-session[5641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:54:08.126483 systemd-logind[1514]: New session 14 of user core.
Sep 10 23:54:08.135285 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 10 23:54:08.345864 sshd[5643]: Connection closed by 10.0.0.1 port 58814
Sep 10 23:54:08.345781 sshd-session[5641]: pam_unix(sshd:session): session closed for user core
Sep 10 23:54:08.358355 systemd[1]: sshd@13-10.0.0.82:22-10.0.0.1:58814.service: Deactivated successfully.
Sep 10 23:54:08.359890 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 23:54:08.360582 systemd-logind[1514]: Session 14 logged out. Waiting for processes to exit.
Sep 10 23:54:08.362884 systemd[1]: Started sshd@14-10.0.0.82:22-10.0.0.1:58826.service - OpenSSH per-connection server daemon (10.0.0.1:58826).
Sep 10 23:54:08.363615 systemd-logind[1514]: Removed session 14.
Sep 10 23:54:08.414986 sshd[5654]: Accepted publickey for core from 10.0.0.1 port 58826 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:54:08.416093 sshd-session[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:54:08.419941 systemd-logind[1514]: New session 15 of user core.
Sep 10 23:54:08.429289 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 10 23:54:09.022151 sshd[5656]: Connection closed by 10.0.0.1 port 58826
Sep 10 23:54:09.023399 sshd-session[5654]: pam_unix(sshd:session): session closed for user core
Sep 10 23:54:09.033628 systemd[1]: sshd@14-10.0.0.82:22-10.0.0.1:58826.service: Deactivated successfully.
Sep 10 23:54:09.036341 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 23:54:09.039744 systemd-logind[1514]: Session 15 logged out. Waiting for processes to exit.
Sep 10 23:54:09.042800 systemd[1]: Started sshd@15-10.0.0.82:22-10.0.0.1:58840.service - OpenSSH per-connection server daemon (10.0.0.1:58840).
Sep 10 23:54:09.045203 systemd-logind[1514]: Removed session 15.
Sep 10 23:54:09.099096 sshd[5674]: Accepted publickey for core from 10.0.0.1 port 58840 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:54:09.100416 sshd-session[5674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:54:09.105204 systemd-logind[1514]: New session 16 of user core.
Sep 10 23:54:09.115267 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 10 23:54:09.414255 sshd[5677]: Connection closed by 10.0.0.1 port 58840
Sep 10 23:54:09.414657 sshd-session[5674]: pam_unix(sshd:session): session closed for user core
Sep 10 23:54:09.424314 systemd[1]: sshd@15-10.0.0.82:22-10.0.0.1:58840.service: Deactivated successfully.
Sep 10 23:54:09.426050 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 23:54:09.427777 systemd-logind[1514]: Session 16 logged out. Waiting for processes to exit.
Sep 10 23:54:09.430779 systemd[1]: Started sshd@16-10.0.0.82:22-10.0.0.1:58846.service - OpenSSH per-connection server daemon (10.0.0.1:58846).
Sep 10 23:54:09.432396 systemd-logind[1514]: Removed session 16.
Sep 10 23:54:09.489497 sshd[5688]: Accepted publickey for core from 10.0.0.1 port 58846 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:54:09.490826 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:54:09.494896 systemd-logind[1514]: New session 17 of user core.
Sep 10 23:54:09.511313 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 10 23:54:09.651910 sshd[5690]: Connection closed by 10.0.0.1 port 58846
Sep 10 23:54:09.652327 sshd-session[5688]: pam_unix(sshd:session): session closed for user core
Sep 10 23:54:09.655631 systemd[1]: sshd@16-10.0.0.82:22-10.0.0.1:58846.service: Deactivated successfully.
Sep 10 23:54:09.657503 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 23:54:09.659379 systemd-logind[1514]: Session 17 logged out. Waiting for processes to exit.
Sep 10 23:54:09.660329 systemd-logind[1514]: Removed session 17.
Sep 10 23:54:14.667738 systemd[1]: Started sshd@17-10.0.0.82:22-10.0.0.1:54948.service - OpenSSH per-connection server daemon (10.0.0.1:54948).
Sep 10 23:54:14.732603 sshd[5705]: Accepted publickey for core from 10.0.0.1 port 54948 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:54:14.733976 sshd-session[5705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:54:14.738246 systemd-logind[1514]: New session 18 of user core.
Sep 10 23:54:14.746502 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 10 23:54:14.877078 sshd[5708]: Connection closed by 10.0.0.1 port 54948
Sep 10 23:54:14.877420 sshd-session[5705]: pam_unix(sshd:session): session closed for user core
Sep 10 23:54:14.881131 systemd[1]: sshd@17-10.0.0.82:22-10.0.0.1:54948.service: Deactivated successfully.
Sep 10 23:54:14.882987 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 23:54:14.883853 systemd-logind[1514]: Session 18 logged out. Waiting for processes to exit.
Sep 10 23:54:14.885258 systemd-logind[1514]: Removed session 18.
Sep 10 23:54:16.095161 kubelet[2651]: I0910 23:54:16.092668 2651 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:54:18.172497 containerd[1529]: time="2025-09-10T23:54:18.172430395Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6ee32e240b64ae9bcbade6932460258ee91aa8583a232bfb4a069867096f6404\" id:\"46eee2876bd921e74e2725605bf1a0ec6e1beab7a18d64f2fda5231c79d71187\" pid:5745 exited_at:{seconds:1757548458 nanos:172152834}"
Sep 10 23:54:19.892582 systemd[1]: Started sshd@18-10.0.0.82:22-10.0.0.1:54950.service - OpenSSH per-connection server daemon (10.0.0.1:54950).
Sep 10 23:54:19.956548 sshd[5757]: Accepted publickey for core from 10.0.0.1 port 54950 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:54:19.957757 sshd-session[5757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:54:19.961385 systemd-logind[1514]: New session 19 of user core.
Sep 10 23:54:19.972346 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 10 23:54:20.104470 sshd[5759]: Connection closed by 10.0.0.1 port 54950
Sep 10 23:54:20.105767 sshd-session[5757]: pam_unix(sshd:session): session closed for user core
Sep 10 23:54:20.109225 systemd-logind[1514]: Session 19 logged out. Waiting for processes to exit.
Sep 10 23:54:20.109510 systemd[1]: sshd@18-10.0.0.82:22-10.0.0.1:54950.service: Deactivated successfully.
Sep 10 23:54:20.111530 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 23:54:20.114670 systemd-logind[1514]: Removed session 19.
Sep 10 23:54:24.956348 containerd[1529]: time="2025-09-10T23:54:24.956309279Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ee8825b122730548f41542673069aaaef97a60d3edd9a0fd86923f4cad8beb4d\" id:\"91c011eee93dd6b149cc13226cb6c9b8bc3df82e9b32ab4d5e4ab27ba4d0066b\" pid:5786 exited_at:{seconds:1757548464 nanos:955835077}"
Sep 10 23:54:25.116375 systemd[1]: Started sshd@19-10.0.0.82:22-10.0.0.1:41094.service - OpenSSH per-connection server daemon (10.0.0.1:41094).
Sep 10 23:54:25.186781 sshd[5799]: Accepted publickey for core from 10.0.0.1 port 41094 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:54:25.188317 sshd-session[5799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:54:25.192115 systemd-logind[1514]: New session 20 of user core.
Sep 10 23:54:25.198330 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 10 23:54:25.394162 sshd[5801]: Connection closed by 10.0.0.1 port 41094
Sep 10 23:54:25.394681 sshd-session[5799]: pam_unix(sshd:session): session closed for user core
Sep 10 23:54:25.398340 systemd[1]: sshd@19-10.0.0.82:22-10.0.0.1:41094.service: Deactivated successfully.
Sep 10 23:54:25.400928 systemd[1]: session-20.scope: Deactivated successfully.
Sep 10 23:54:25.401909 systemd-logind[1514]: Session 20 logged out. Waiting for processes to exit.
Sep 10 23:54:25.403575 systemd-logind[1514]: Removed session 20.
Sep 10 23:54:29.248942 containerd[1529]: time="2025-09-10T23:54:29.248899412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebf339801332e088505ed7e770bf2d94b270817e31a93b0e7cded6933d1b167a\" id:\"470bb0bb89b63625c5fa006d6f0257d9542c805911145cfd118674b5fd25eb0e\" pid:5825 exited_at:{seconds:1757548469 nanos:248412370}"
Sep 10 23:54:30.405369 systemd[1]: Started sshd@20-10.0.0.82:22-10.0.0.1:52198.service - OpenSSH per-connection server daemon (10.0.0.1:52198).
Sep 10 23:54:30.470449 sshd[5842]: Accepted publickey for core from 10.0.0.1 port 52198 ssh2: RSA SHA256:lsmhoLsJ6VkHSnmB7JrdlCWHjclEQMgNfFd+nspwIAE
Sep 10 23:54:30.471612 sshd-session[5842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:54:30.475808 systemd-logind[1514]: New session 21 of user core.
Sep 10 23:54:30.481284 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 10 23:54:30.637597 sshd[5844]: Connection closed by 10.0.0.1 port 52198
Sep 10 23:54:30.637988 sshd-session[5842]: pam_unix(sshd:session): session closed for user core
Sep 10 23:54:30.643204 systemd[1]: sshd@20-10.0.0.82:22-10.0.0.1:52198.service: Deactivated successfully.
Sep 10 23:54:30.645662 systemd[1]: session-21.scope: Deactivated successfully.
Sep 10 23:54:30.646574 systemd-logind[1514]: Session 21 logged out. Waiting for processes to exit.
Sep 10 23:54:30.647869 systemd-logind[1514]: Removed session 21.
Sep 10 23:54:30.941514 kubelet[2651]: E0910 23:54:30.941472 2651 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"