Sep 12 23:48:21.751217 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 23:48:21.751236 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Fri Sep 12 22:15:14 -00 2025
Sep 12 23:48:21.751245 kernel: KASLR enabled
Sep 12 23:48:21.751251 kernel: efi: EFI v2.7 by EDK II
Sep 12 23:48:21.751256 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 12 23:48:21.751262 kernel: random: crng init done
Sep 12 23:48:21.751268 kernel: secureboot: Secure boot disabled
Sep 12 23:48:21.751274 kernel: ACPI: Early table checksum verification disabled
Sep 12 23:48:21.751279 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 12 23:48:21.751286 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 23:48:21.751292 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:21.751297 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:21.751303 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:21.751308 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:21.751315 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:21.751323 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:21.751329 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:21.751335 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:21.751341 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 23:48:21.751347 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 12 23:48:21.751352 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 12 23:48:21.751358 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 23:48:21.751364 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 12 23:48:21.751428 kernel: Zone ranges:
Sep 12 23:48:21.751437 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 23:48:21.751445 kernel: DMA32 empty
Sep 12 23:48:21.751451 kernel: Normal empty
Sep 12 23:48:21.751457 kernel: Device empty
Sep 12 23:48:21.751463 kernel: Movable zone start for each node
Sep 12 23:48:21.751468 kernel: Early memory node ranges
Sep 12 23:48:21.751474 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 12 23:48:21.751480 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 12 23:48:21.751486 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 12 23:48:21.751492 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 12 23:48:21.751498 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 12 23:48:21.751504 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 12 23:48:21.751510 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 12 23:48:21.751517 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 12 23:48:21.751523 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 12 23:48:21.751529 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 12 23:48:21.751539 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 12 23:48:21.751545 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 12 23:48:21.751551 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 12 23:48:21.751559 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 23:48:21.751565 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 12 23:48:21.751572 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 12 23:48:21.751578 kernel: psci: probing for conduit method from ACPI.
Sep 12 23:48:21.751584 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 23:48:21.751591 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 23:48:21.751597 kernel: psci: Trusted OS migration not required
Sep 12 23:48:21.751603 kernel: psci: SMC Calling Convention v1.1
Sep 12 23:48:21.751610 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 12 23:48:21.751616 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 12 23:48:21.751624 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 12 23:48:21.751630 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 12 23:48:21.751636 kernel: Detected PIPT I-cache on CPU0
Sep 12 23:48:21.751643 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 23:48:21.751649 kernel: CPU features: detected: Spectre-v4
Sep 12 23:48:21.751655 kernel: CPU features: detected: Spectre-BHB
Sep 12 23:48:21.751661 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 23:48:21.751674 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 23:48:21.751680 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 23:48:21.751687 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 23:48:21.751693 kernel: alternatives: applying boot alternatives
Sep 12 23:48:21.751700 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=24c67f2f39578656f2256031b807ae9c943b42e628f6df7d0e56546910a5aaaa
Sep 12 23:48:21.751708 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 23:48:21.751715 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 23:48:21.751721 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 23:48:21.751728 kernel: Fallback order for Node 0: 0
Sep 12 23:48:21.751734 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 12 23:48:21.751740 kernel: Policy zone: DMA
Sep 12 23:48:21.751747 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 23:48:21.751753 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 12 23:48:21.751760 kernel: software IO TLB: area num 4.
Sep 12 23:48:21.751766 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 12 23:48:21.751772 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 12 23:48:21.751783 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 23:48:21.751792 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 23:48:21.751802 kernel: rcu: RCU event tracing is enabled.
Sep 12 23:48:21.751810 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 23:48:21.751817 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 23:48:21.751823 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 23:48:21.751831 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 23:48:21.751837 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 23:48:21.751844 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:48:21.751851 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 23:48:21.751857 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 23:48:21.751865 kernel: GICv3: 256 SPIs implemented
Sep 12 23:48:21.751871 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 23:48:21.751878 kernel: Root IRQ handler: gic_handle_irq
Sep 12 23:48:21.751884 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 23:48:21.751890 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 12 23:48:21.751896 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 12 23:48:21.751903 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 12 23:48:21.751909 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 23:48:21.751916 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 12 23:48:21.751923 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 12 23:48:21.751929 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 12 23:48:21.751935 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 23:48:21.751944 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:48:21.751950 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 23:48:21.751957 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 23:48:21.751964 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 23:48:21.751971 kernel: arm-pv: using stolen time PV
Sep 12 23:48:21.751978 kernel: Console: colour dummy device 80x25
Sep 12 23:48:21.751984 kernel: ACPI: Core revision 20240827
Sep 12 23:48:21.751991 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 23:48:21.751998 kernel: pid_max: default: 32768 minimum: 301
Sep 12 23:48:21.752004 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 23:48:21.752012 kernel: landlock: Up and running.
Sep 12 23:48:21.752019 kernel: SELinux: Initializing.
Sep 12 23:48:21.752026 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:48:21.752032 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 23:48:21.752039 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 23:48:21.752058 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 23:48:21.752065 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 23:48:21.752072 kernel: Remapping and enabling EFI services.
Sep 12 23:48:21.752079 kernel: smp: Bringing up secondary CPUs ...
Sep 12 23:48:21.752091 kernel: Detected PIPT I-cache on CPU1
Sep 12 23:48:21.752098 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 12 23:48:21.752105 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 12 23:48:21.752114 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:48:21.752121 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 23:48:21.752127 kernel: Detected PIPT I-cache on CPU2
Sep 12 23:48:21.752134 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 12 23:48:21.752142 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 12 23:48:21.752150 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:48:21.752156 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 12 23:48:21.752163 kernel: Detected PIPT I-cache on CPU3
Sep 12 23:48:21.752170 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 12 23:48:21.752177 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 12 23:48:21.752184 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 23:48:21.752191 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 12 23:48:21.752198 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 23:48:21.752205 kernel: SMP: Total of 4 processors activated.
Sep 12 23:48:21.752213 kernel: CPU: All CPU(s) started at EL1
Sep 12 23:48:21.752220 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 23:48:21.752227 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 23:48:21.752234 kernel: CPU features: detected: Common not Private translations
Sep 12 23:48:21.752240 kernel: CPU features: detected: CRC32 instructions
Sep 12 23:48:21.752247 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 12 23:48:21.752254 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 23:48:21.752261 kernel: CPU features: detected: LSE atomic instructions
Sep 12 23:48:21.752267 kernel: CPU features: detected: Privileged Access Never
Sep 12 23:48:21.752274 kernel: CPU features: detected: RAS Extension Support
Sep 12 23:48:21.752282 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 12 23:48:21.752289 kernel: alternatives: applying system-wide alternatives
Sep 12 23:48:21.752296 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 12 23:48:21.752303 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2440K rwdata, 9084K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 12 23:48:21.752310 kernel: devtmpfs: initialized
Sep 12 23:48:21.752317 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 23:48:21.752324 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 23:48:21.752331 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 23:48:21.752339 kernel: 0 pages in range for non-PLT usage
Sep 12 23:48:21.752345 kernel: 508560 pages in range for PLT usage
Sep 12 23:48:21.752352 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 23:48:21.752359 kernel: SMBIOS 3.0.0 present.
Sep 12 23:48:21.752366 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 12 23:48:21.752379 kernel: DMI: Memory slots populated: 1/1
Sep 12 23:48:21.752386 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 23:48:21.752393 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 23:48:21.752410 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 23:48:21.752419 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 23:48:21.752426 kernel: audit: initializing netlink subsys (disabled)
Sep 12 23:48:21.752433 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Sep 12 23:48:21.752440 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 23:48:21.752447 kernel: cpuidle: using governor menu
Sep 12 23:48:21.752454 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 23:48:21.752461 kernel: ASID allocator initialised with 32768 entries
Sep 12 23:48:21.752467 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 23:48:21.752474 kernel: Serial: AMBA PL011 UART driver
Sep 12 23:48:21.752483 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 23:48:21.752490 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 23:48:21.752497 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 23:48:21.752504 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 23:48:21.752511 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 23:48:21.752518 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 23:48:21.752525 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 23:48:21.752531 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 23:48:21.752538 kernel: ACPI: Added _OSI(Module Device)
Sep 12 23:48:21.752546 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 23:48:21.752553 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 23:48:21.752560 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 23:48:21.752567 kernel: ACPI: Interpreter enabled
Sep 12 23:48:21.752574 kernel: ACPI: Using GIC for interrupt routing
Sep 12 23:48:21.752580 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 23:48:21.752587 kernel: ACPI: CPU0 has been hot-added
Sep 12 23:48:21.752594 kernel: ACPI: CPU1 has been hot-added
Sep 12 23:48:21.752601 kernel: ACPI: CPU2 has been hot-added
Sep 12 23:48:21.752607 kernel: ACPI: CPU3 has been hot-added
Sep 12 23:48:21.752615 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 23:48:21.752622 kernel: printk: legacy console [ttyAMA0] enabled
Sep 12 23:48:21.752630 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 23:48:21.752772 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 23:48:21.752840 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 23:48:21.752900 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 23:48:21.752958 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 12 23:48:21.753019 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 12 23:48:21.753028 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 12 23:48:21.753035 kernel: PCI host bridge to bus 0000:00
Sep 12 23:48:21.753098 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 12 23:48:21.753153 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 23:48:21.753206 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 12 23:48:21.753259 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 23:48:21.753336 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 12 23:48:21.753432 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 23:48:21.753498 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 12 23:48:21.753561 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 12 23:48:21.753621 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 23:48:21.753693 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 12 23:48:21.753756 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 12 23:48:21.753821 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 12 23:48:21.753881 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 12 23:48:21.753936 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 23:48:21.753990 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 12 23:48:21.753999 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 23:48:21.754006 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 23:48:21.754013 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 23:48:21.754021 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 23:48:21.754028 kernel: iommu: Default domain type: Translated
Sep 12 23:48:21.754035 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 23:48:21.754042 kernel: efivars: Registered efivars operations
Sep 12 23:48:21.754049 kernel: vgaarb: loaded
Sep 12 23:48:21.754056 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 23:48:21.754062 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 23:48:21.754069 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 23:48:21.754076 kernel: pnp: PnP ACPI init
Sep 12 23:48:21.754142 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 12 23:48:21.754151 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 23:48:21.754158 kernel: NET: Registered PF_INET protocol family
Sep 12 23:48:21.754165 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 23:48:21.754172 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 23:48:21.754179 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 23:48:21.754186 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 23:48:21.754193 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 23:48:21.754201 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 23:48:21.754208 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:48:21.754215 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 23:48:21.754222 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 23:48:21.754229 kernel: PCI: CLS 0 bytes, default 64
Sep 12 23:48:21.754236 kernel: kvm [1]: HYP mode not available
Sep 12 23:48:21.754242 kernel: Initialise system trusted keyrings
Sep 12 23:48:21.754249 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 23:48:21.754256 kernel: Key type asymmetric registered
Sep 12 23:48:21.754264 kernel: Asymmetric key parser 'x509' registered
Sep 12 23:48:21.754271 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 12 23:48:21.754278 kernel: io scheduler mq-deadline registered
Sep 12 23:48:21.754284 kernel: io scheduler kyber registered
Sep 12 23:48:21.754291 kernel: io scheduler bfq registered
Sep 12 23:48:21.754298 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 23:48:21.754305 kernel: ACPI: button: Power Button [PWRB]
Sep 12 23:48:21.754313 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 23:48:21.754411 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 12 23:48:21.754424 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 23:48:21.754431 kernel: thunder_xcv, ver 1.0
Sep 12 23:48:21.754438 kernel: thunder_bgx, ver 1.0
Sep 12 23:48:21.754445 kernel: nicpf, ver 1.0
Sep 12 23:48:21.754451 kernel: nicvf, ver 1.0
Sep 12 23:48:21.754531 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 23:48:21.754593 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T23:48:21 UTC (1757720901)
Sep 12 23:48:21.754602 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 23:48:21.754609 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 12 23:48:21.754618 kernel: watchdog: NMI not fully supported
Sep 12 23:48:21.754625 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 23:48:21.754632 kernel: NET: Registered PF_INET6 protocol family
Sep 12 23:48:21.754639 kernel: Segment Routing with IPv6
Sep 12 23:48:21.754646 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 23:48:21.754652 kernel: NET: Registered PF_PACKET protocol family
Sep 12 23:48:21.754659 kernel: Key type dns_resolver registered
Sep 12 23:48:21.754672 kernel: registered taskstats version 1
Sep 12 23:48:21.754679 kernel: Loading compiled-in X.509 certificates
Sep 12 23:48:21.754689 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 4d2b25dbd7cb4cb70d9284570c2ea7dd89d62e99'
Sep 12 23:48:21.754696 kernel: Demotion targets for Node 0: null
Sep 12 23:48:21.754702 kernel: Key type .fscrypt registered
Sep 12 23:48:21.754709 kernel: Key type fscrypt-provisioning registered
Sep 12 23:48:21.754716 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 23:48:21.754723 kernel: ima: Allocated hash algorithm: sha1
Sep 12 23:48:21.754730 kernel: ima: No architecture policies found
Sep 12 23:48:21.754737 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 23:48:21.754745 kernel: clk: Disabling unused clocks
Sep 12 23:48:21.754751 kernel: PM: genpd: Disabling unused power domains
Sep 12 23:48:21.754758 kernel: Warning: unable to open an initial console.
Sep 12 23:48:21.754765 kernel: Freeing unused kernel memory: 38976K
Sep 12 23:48:21.754772 kernel: Run /init as init process
Sep 12 23:48:21.754779 kernel: with arguments:
Sep 12 23:48:21.754785 kernel: /init
Sep 12 23:48:21.754792 kernel: with environment:
Sep 12 23:48:21.754798 kernel: HOME=/
Sep 12 23:48:21.754806 kernel: TERM=linux
Sep 12 23:48:21.754814 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 23:48:21.754821 systemd[1]: Successfully made /usr/ read-only.
Sep 12 23:48:21.754831 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 23:48:21.754839 systemd[1]: Detected virtualization kvm.
Sep 12 23:48:21.754846 systemd[1]: Detected architecture arm64.
Sep 12 23:48:21.754853 systemd[1]: Running in initrd.
Sep 12 23:48:21.754860 systemd[1]: No hostname configured, using default hostname.
Sep 12 23:48:21.754869 systemd[1]: Hostname set to .
Sep 12 23:48:21.754877 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:48:21.754884 systemd[1]: Queued start job for default target initrd.target.
Sep 12 23:48:21.754891 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:48:21.754899 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:48:21.754907 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 23:48:21.754914 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:48:21.754922 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 23:48:21.754931 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 23:48:21.754940 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 23:48:21.754947 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 23:48:21.754955 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:48:21.754962 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:48:21.754970 systemd[1]: Reached target paths.target - Path Units.
Sep 12 23:48:21.754977 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:48:21.754986 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:48:21.754994 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 23:48:21.755001 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:48:21.755009 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:48:21.755016 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 23:48:21.755024 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 23:48:21.755031 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:48:21.755039 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:48:21.755047 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:48:21.755055 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 23:48:21.755062 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 23:48:21.755070 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:48:21.755077 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 23:48:21.755085 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 23:48:21.755093 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 23:48:21.755100 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:48:21.755107 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:48:21.755116 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:48:21.755123 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 23:48:21.755131 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:48:21.755139 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 23:48:21.755164 systemd-journald[245]: Collecting audit messages is disabled.
Sep 12 23:48:21.755182 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:48:21.755190 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:48:21.755198 systemd-journald[245]: Journal started
Sep 12 23:48:21.755217 systemd-journald[245]: Runtime Journal (/run/log/journal/cd434354324f4e3999b87a0c5795a662) is 6M, max 48.5M, 42.4M free.
Sep 12 23:48:21.749348 systemd-modules-load[246]: Inserted module 'overlay'
Sep 12 23:48:21.757966 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:48:21.762399 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 23:48:21.763348 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 12 23:48:21.764051 kernel: Bridge firewalling registered
Sep 12 23:48:21.765946 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 23:48:21.767415 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:48:21.768853 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:48:21.771875 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:48:21.777879 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:48:21.779922 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:48:21.780948 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 23:48:21.783658 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:48:21.792530 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:48:21.793663 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:48:21.796510 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 23:48:21.798504 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:48:21.799533 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:48:21.812075 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=24c67f2f39578656f2256031b807ae9c943b42e628f6df7d0e56546910a5aaaa
Sep 12 23:48:21.825676 systemd-resolved[288]: Positive Trust Anchors:
Sep 12 23:48:21.825698 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 23:48:21.825729 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 23:48:21.830494 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 12 23:48:21.832575 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 23:48:21.833464 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:48:21.881409 kernel: SCSI subsystem initialized
Sep 12 23:48:21.886390 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 23:48:21.893409 kernel: iscsi: registered transport (tcp)
Sep 12 23:48:21.905731 kernel: iscsi: registered transport (qla4xxx)
Sep 12 23:48:21.905773 kernel: QLogic iSCSI HBA Driver
Sep 12 23:48:21.922191 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 23:48:21.944434 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 23:48:21.945766 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 23:48:21.992868 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:48:21.994727 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 23:48:22.065407 kernel: raid6: neonx8 gen() 15543 MB/s
Sep 12 23:48:22.082387 kernel: raid6: neonx4 gen() 15708 MB/s
Sep 12 23:48:22.099386 kernel: raid6: neonx2 gen() 13103 MB/s
Sep 12 23:48:22.116385 kernel: raid6: neonx1 gen() 10429 MB/s
Sep 12 23:48:22.133384 kernel: raid6: int64x8 gen() 6848 MB/s
Sep 12 23:48:22.150385 kernel: raid6: int64x4 gen() 7261 MB/s
Sep 12 23:48:22.167390 kernel: raid6: int64x2 gen() 6042 MB/s
Sep 12 23:48:22.184396 kernel: raid6: int64x1 gen() 5008 MB/s
Sep 12 23:48:22.184429 kernel: raid6: using algorithm neonx4 gen() 15708 MB/s
Sep 12 23:48:22.202401 kernel: raid6: .... xor() 12294 MB/s, rmw enabled
Sep 12 23:48:22.202444 kernel: raid6: using neon recovery algorithm
Sep 12 23:48:22.206797 kernel: xor: measuring software checksum speed
Sep 12 23:48:22.206823 kernel: 8regs : 21613 MB/sec
Sep 12 23:48:22.207394 kernel: 32regs : 21681 MB/sec
Sep 12 23:48:22.208389 kernel: arm64_neon : 26484 MB/sec
Sep 12 23:48:22.208403 kernel: xor: using function: arm64_neon (26484 MB/sec)
Sep 12 23:48:22.261414 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 23:48:22.267472 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:48:22.269628 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:48:22.304743 systemd-udevd[497]: Using default interface naming scheme 'v255'.
Sep 12 23:48:22.308866 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:48:22.310582 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 23:48:22.334966 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation
Sep 12 23:48:22.358127 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:48:22.360298 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:48:22.415402 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:48:22.417870 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 23:48:22.465479 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 12 23:48:22.477542 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 23:48:22.481628 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 23:48:22.481644 kernel: GPT:9289727 != 19775487
Sep 12 23:48:22.481653 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 23:48:22.481662 kernel: GPT:9289727 != 19775487
Sep 12 23:48:22.481682 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 23:48:22.481692 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:48:22.470788 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:48:22.470906 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:48:22.480101 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:48:22.482600 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 23:48:22.503633 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 23:48:22.509442 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:48:22.521397 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 23:48:22.523452 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:48:22.535078 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 23:48:22.536139 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 23:48:22.544241 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 23:48:22.545270 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:48:22.546875 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:48:22.548417 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:48:22.550636 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 23:48:22.552173 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 23:48:22.573101 disk-uuid[588]: Primary Header is updated.
Sep 12 23:48:22.573101 disk-uuid[588]: Secondary Entries is updated.
Sep 12 23:48:22.573101 disk-uuid[588]: Secondary Header is updated.
Sep 12 23:48:22.576185 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:48:22.578457 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:48:23.583409 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 23:48:23.583848 disk-uuid[592]: The operation has completed successfully.
Sep 12 23:48:23.612995 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 23:48:23.613107 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 23:48:23.639166 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 23:48:23.666385 sh[609]: Success
Sep 12 23:48:23.679168 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 23:48:23.679210 kernel: device-mapper: uevent: version 1.0.3
Sep 12 23:48:23.679238 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 23:48:23.686801 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 12 23:48:23.711923 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 23:48:23.714526 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 23:48:23.729053 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 23:48:23.735295 kernel: BTRFS: device fsid 103b8b46-5d84-49b9-83b1-52780b53e7b3 devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (622)
Sep 12 23:48:23.735334 kernel: BTRFS info (device dm-0): first mount of filesystem 103b8b46-5d84-49b9-83b1-52780b53e7b3
Sep 12 23:48:23.735426 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:48:23.739537 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 23:48:23.739572 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 23:48:23.740626 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 23:48:23.741745 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 23:48:23.742808 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 23:48:23.743623 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 23:48:23.746278 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 23:48:23.768513 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (651)
Sep 12 23:48:23.768573 kernel: BTRFS info (device vda6): first mount of filesystem d14b678d-b2cf-466a-9c6e-b6d9277deb1d
Sep 12 23:48:23.770400 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:48:23.773403 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 23:48:23.773441 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 23:48:23.778390 kernel: BTRFS info (device vda6): last unmount of filesystem d14b678d-b2cf-466a-9c6e-b6d9277deb1d
Sep 12 23:48:23.778846 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 23:48:23.781233 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 23:48:23.848095 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:48:23.850936 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:48:23.887251 systemd-networkd[802]: lo: Link UP
Sep 12 23:48:23.887265 systemd-networkd[802]: lo: Gained carrier
Sep 12 23:48:23.888018 systemd-networkd[802]: Enumeration completed
Sep 12 23:48:23.888448 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:48:23.888451 systemd-networkd[802]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 23:48:23.889433 ignition[696]: Ignition 2.21.0
Sep 12 23:48:23.889116 systemd-networkd[802]: eth0: Link UP
Sep 12 23:48:23.889440 ignition[696]: Stage: fetch-offline
Sep 12 23:48:23.889264 systemd-networkd[802]: eth0: Gained carrier
Sep 12 23:48:23.889473 ignition[696]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:48:23.889273 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 23:48:23.889481 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:48:23.891516 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 23:48:23.889640 ignition[696]: parsed url from cmdline: ""
Sep 12 23:48:23.892886 systemd[1]: Reached target network.target - Network.
Sep 12 23:48:23.889643 ignition[696]: no config URL provided
Sep 12 23:48:23.889647 ignition[696]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 23:48:23.889654 ignition[696]: no config at "/usr/lib/ignition/user.ign"
Sep 12 23:48:23.889681 ignition[696]: op(1): [started] loading QEMU firmware config module
Sep 12 23:48:23.889685 ignition[696]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 23:48:23.895041 ignition[696]: op(1): [finished] loading QEMU firmware config module
Sep 12 23:48:23.914420 systemd-networkd[802]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 23:48:23.944166 ignition[696]: parsing config with SHA512: 69977b237b3eb7a7006bc0d602d0bcf112d1a97b4bbc28478d56e8de99b33f7822c63696a67ddb5eb6c3696d793562110f510f73fb349289b9bee2ae6d280925
Sep 12 23:48:23.948233 unknown[696]: fetched base config from "system"
Sep 12 23:48:23.948245 unknown[696]: fetched user config from "qemu"
Sep 12 23:48:23.948646 ignition[696]: fetch-offline: fetch-offline passed
Sep 12 23:48:23.950644 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:48:23.948724 ignition[696]: Ignition finished successfully
Sep 12 23:48:23.951868 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 23:48:23.952658 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 23:48:23.978794 ignition[810]: Ignition 2.21.0
Sep 12 23:48:23.978809 ignition[810]: Stage: kargs
Sep 12 23:48:23.978935 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:48:23.978943 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:48:23.981457 ignition[810]: kargs: kargs passed
Sep 12 23:48:23.981758 ignition[810]: Ignition finished successfully
Sep 12 23:48:23.984524 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 23:48:23.986209 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 23:48:24.010367 ignition[818]: Ignition 2.21.0
Sep 12 23:48:24.010403 ignition[818]: Stage: disks
Sep 12 23:48:24.010555 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Sep 12 23:48:24.010564 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:48:24.012053 ignition[818]: disks: disks passed
Sep 12 23:48:24.012112 ignition[818]: Ignition finished successfully
Sep 12 23:48:24.016358 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 23:48:24.017305 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 23:48:24.018526 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 23:48:24.020186 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:48:24.021757 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 23:48:24.023004 systemd[1]: Reached target basic.target - Basic System.
Sep 12 23:48:24.025109 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 23:48:24.046586 systemd-resolved[288]: Detected conflict on linux IN A 10.0.0.101
Sep 12 23:48:24.046599 systemd-resolved[288]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
Sep 12 23:48:24.048889 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 12 23:48:24.073125 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 23:48:24.075588 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 23:48:24.134344 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 23:48:24.135516 kernel: EXT4-fs (vda9): mounted filesystem 01c463ed-b282-4a97-bc2e-d1c81f25bb05 r/w with ordered data mode. Quota mode: none.
Sep 12 23:48:24.135405 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:48:24.137309 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:48:24.138842 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 23:48:24.139610 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 23:48:24.139647 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 23:48:24.139677 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:48:24.151800 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 23:48:24.153984 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 23:48:24.158153 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (836)
Sep 12 23:48:24.158173 kernel: BTRFS info (device vda6): first mount of filesystem d14b678d-b2cf-466a-9c6e-b6d9277deb1d
Sep 12 23:48:24.158183 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:48:24.160388 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 23:48:24.160406 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 23:48:24.161276 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:48:24.200932 initrd-setup-root[861]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 23:48:24.203855 initrd-setup-root[868]: cut: /sysroot/etc/group: No such file or directory
Sep 12 23:48:24.206737 initrd-setup-root[875]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 23:48:24.209632 initrd-setup-root[882]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 23:48:24.274341 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 23:48:24.276987 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 23:48:24.279131 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 23:48:24.294483 kernel: BTRFS info (device vda6): last unmount of filesystem d14b678d-b2cf-466a-9c6e-b6d9277deb1d
Sep 12 23:48:24.303643 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 23:48:24.310484 ignition[950]: INFO : Ignition 2.21.0
Sep 12 23:48:24.310484 ignition[950]: INFO : Stage: mount
Sep 12 23:48:24.311672 ignition[950]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:48:24.311672 ignition[950]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:48:24.313227 ignition[950]: INFO : mount: mount passed
Sep 12 23:48:24.313227 ignition[950]: INFO : Ignition finished successfully
Sep 12 23:48:24.313955 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 23:48:24.316403 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 23:48:24.864437 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 23:48:24.865922 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 23:48:24.884611 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (962)
Sep 12 23:48:24.884643 kernel: BTRFS info (device vda6): first mount of filesystem d14b678d-b2cf-466a-9c6e-b6d9277deb1d
Sep 12 23:48:24.884653 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 23:48:24.887481 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 23:48:24.887502 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 23:48:24.888741 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 23:48:24.918352 ignition[979]: INFO : Ignition 2.21.0
Sep 12 23:48:24.918352 ignition[979]: INFO : Stage: files
Sep 12 23:48:24.920528 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:48:24.920528 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:48:24.920528 ignition[979]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 23:48:24.920528 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 23:48:24.920528 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 23:48:24.925311 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 23:48:24.925311 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 23:48:24.925311 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 23:48:24.925311 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 23:48:24.925311 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 12 23:48:24.922796 unknown[979]: wrote ssh authorized keys file for user: core
Sep 12 23:48:25.218130 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 23:48:25.620474 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 23:48:25.620474 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 23:48:25.623609 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 23:48:25.623609 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:48:25.623609 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 23:48:25.623609 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:48:25.623609 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 23:48:25.623609 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:48:25.623609 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 23:48:25.623609 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:48:25.623609 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 23:48:25.623609 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 23:48:25.636748 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 23:48:25.636748 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 23:48:25.636748 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 12 23:48:25.732515 systemd-networkd[802]: eth0: Gained IPv6LL
Sep 12 23:48:26.086497 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 23:48:26.498009 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 23:48:26.498009 ignition[979]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 23:48:26.501394 ignition[979]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:48:26.501394 ignition[979]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 23:48:26.501394 ignition[979]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 23:48:26.501394 ignition[979]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 12 23:48:26.501394 ignition[979]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 23:48:26.501394 ignition[979]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 23:48:26.501394 ignition[979]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 12 23:48:26.501394 ignition[979]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 23:48:26.515184 ignition[979]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 23:48:26.518618 ignition[979]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 23:48:26.519927 ignition[979]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 23:48:26.519927 ignition[979]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 23:48:26.519927 ignition[979]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 23:48:26.519927 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:48:26.519927 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 23:48:26.519927 ignition[979]: INFO : files: files passed
Sep 12 23:48:26.519927 ignition[979]: INFO : Ignition finished successfully
Sep 12 23:48:26.520862 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 23:48:26.523271 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 23:48:26.524876 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 23:48:26.537291 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 23:48:26.538344 initrd-setup-root-after-ignition[1008]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 23:48:26.538625 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 23:48:26.542269 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:48:26.542269 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:48:26.546947 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 23:48:26.543018 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:48:26.544921 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 23:48:26.547522 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 23:48:26.588417 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 23:48:26.588527 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 23:48:26.590739 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 23:48:26.592167 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 23:48:26.593823 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 23:48:26.594684 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 23:48:26.608572 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:48:26.610880 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 23:48:26.631447 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 23:48:26.632523 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:48:26.634259 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 23:48:26.635822 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 23:48:26.635958 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 23:48:26.638077 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 23:48:26.639758 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 23:48:26.641176 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 23:48:26.642618 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 23:48:26.644194 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 23:48:26.645797 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 23:48:26.647322 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 23:48:26.648943 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 23:48:26.650496 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 23:48:26.652013 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 23:48:26.653289 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 23:48:26.654470 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 23:48:26.654601 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 23:48:26.656343 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:48:26.657910 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:48:26.659385 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 23:48:26.659493 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:48:26.661101 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 23:48:26.661214 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 23:48:26.663442 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 23:48:26.663554 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 23:48:26.664983 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 23:48:26.666223 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 23:48:26.669450 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:48:26.670439 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 23:48:26.672323 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 23:48:26.673702 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 23:48:26.673787 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 23:48:26.675211 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 23:48:26.675283 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 23:48:26.676611 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 23:48:26.676733 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 23:48:26.678378 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 23:48:26.678476 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 23:48:26.680577 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 23:48:26.682809 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 23:48:26.683839 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 23:48:26.683947 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:48:26.685529 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 23:48:26.685629 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 23:48:26.690485 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 23:48:26.695539 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 23:48:26.704234 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 23:48:26.708971 ignition[1034]: INFO : Ignition 2.21.0
Sep 12 23:48:26.708971 ignition[1034]: INFO : Stage: umount
Sep 12 23:48:26.711015 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 23:48:26.711015 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 23:48:26.711015 ignition[1034]: INFO : umount: umount passed
Sep 12 23:48:26.711015 ignition[1034]: INFO : Ignition finished successfully
Sep 12 23:48:26.713093 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 23:48:26.713191 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 23:48:26.716527 systemd[1]: Stopped target network.target - Network.
Sep 12 23:48:26.717501 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 23:48:26.717563 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 23:48:26.719052 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 23:48:26.719096 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 23:48:26.720106 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 23:48:26.720156 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 23:48:26.721136 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 23:48:26.721176 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 23:48:26.722827 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 23:48:26.724318 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 23:48:26.730433 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 23:48:26.730542 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 23:48:26.734473 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 23:48:26.734722 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 23:48:26.734760 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:48:26.738009 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 23:48:26.740712 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 23:48:26.740812 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 23:48:26.743749 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 23:48:26.744139 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 23:48:26.745603 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 23:48:26.745642 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:48:26.748192 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 23:48:26.749848 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 23:48:26.749903 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 23:48:26.752350 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 23:48:26.752416 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:48:26.755591 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 23:48:26.755637 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:48:26.757346 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:48:26.764214 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 23:48:26.765502 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 23:48:26.765590 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 23:48:26.766756 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 23:48:26.766834 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 23:48:26.782139 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 23:48:26.782277 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:48:26.784257 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 23:48:26.784292 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:48:26.785925 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 23:48:26.785950 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:48:26.787645 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 23:48:26.787705 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 23:48:26.790251 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 23:48:26.790300 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 23:48:26.792668 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 23:48:26.792721 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 23:48:26.795976 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 23:48:26.796952 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 23:48:26.797023 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 23:48:26.799649 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 23:48:26.799705 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:48:26.802199 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 23:48:26.802240 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:48:26.805221 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 23:48:26.805264 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:48:26.807322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 23:48:26.807366 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 23:48:26.810848 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 23:48:26.811498 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 23:48:26.816861 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 23:48:26.816955 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 23:48:26.818877 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 23:48:26.820980 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 23:48:26.850493 systemd[1]: Switching root.
Sep 12 23:48:26.881610 systemd-journald[245]: Journal stopped
Sep 12 23:48:27.623456 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Sep 12 23:48:27.623510 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 23:48:27.623526 kernel: SELinux: policy capability open_perms=1
Sep 12 23:48:27.623549 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 23:48:27.623562 kernel: SELinux: policy capability always_check_network=0
Sep 12 23:48:27.623571 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 23:48:27.623582 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 23:48:27.623591 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 23:48:27.623600 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 23:48:27.623609 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 23:48:27.623619 kernel: audit: type=1403 audit(1757720907.069:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 23:48:27.623630 systemd[1]: Successfully loaded SELinux policy in 46.643ms.
Sep 12 23:48:27.623666 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.356ms.
Sep 12 23:48:27.623679 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 23:48:27.623690 systemd[1]: Detected virtualization kvm.
Sep 12 23:48:27.623701 systemd[1]: Detected architecture arm64.
Sep 12 23:48:27.623712 systemd[1]: Detected first boot.
Sep 12 23:48:27.623726 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 23:48:27.623737 zram_generator::config[1082]: No configuration found.
Sep 12 23:48:27.623748 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 23:48:27.623759 systemd[1]: Populated /etc with preset unit settings.
Sep 12 23:48:27.623771 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 23:48:27.623781 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 23:48:27.623793 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 23:48:27.623803 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 23:48:27.623813 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 23:48:27.623824 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 23:48:27.623834 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 23:48:27.623844 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 23:48:27.623854 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 23:48:27.623865 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 23:48:27.623884 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 23:48:27.623896 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 23:48:27.623916 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 23:48:27.623934 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 23:48:27.623952 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 23:48:27.623962 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 23:48:27.623973 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 23:48:27.623988 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 23:48:27.623998 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 12 23:48:27.624009 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 23:48:27.624021 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 23:48:27.624031 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 23:48:27.624041 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 23:48:27.624051 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 23:48:27.624061 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 23:48:27.624071 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 23:48:27.624081 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 23:48:27.624092 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 23:48:27.624103 systemd[1]: Reached target swap.target - Swaps.
Sep 12 23:48:27.624114 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 23:48:27.624124 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 23:48:27.624134 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 23:48:27.624144 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 23:48:27.624154 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 23:48:27.624164 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 23:48:27.624175 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 23:48:27.624184 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 23:48:27.624195 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 23:48:27.624207 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 23:48:27.624217 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 23:48:27.624227 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 23:48:27.624237 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 23:48:27.624247 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 23:48:27.624257 systemd[1]: Reached target machines.target - Containers.
Sep 12 23:48:27.624268 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 23:48:27.624280 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:48:27.624290 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 23:48:27.624301 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 23:48:27.624312 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:48:27.624323 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 23:48:27.624333 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:48:27.624344 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 23:48:27.624354 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:48:27.624364 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 23:48:27.624385 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 23:48:27.624396 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 23:48:27.624406 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 23:48:27.624416 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 23:48:27.624427 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 23:48:27.624439 kernel: fuse: init (API version 7.41)
Sep 12 23:48:27.624448 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 23:48:27.624459 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 23:48:27.624470 kernel: loop: module loaded
Sep 12 23:48:27.624480 kernel: ACPI: bus type drm_connector registered
Sep 12 23:48:27.624490 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 23:48:27.624500 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 23:48:27.624510 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 23:48:27.624541 systemd-journald[1150]: Collecting audit messages is disabled.
Sep 12 23:48:27.624565 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 23:48:27.624577 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 23:48:27.624588 systemd[1]: Stopped verity-setup.service.
Sep 12 23:48:27.624599 systemd-journald[1150]: Journal started
Sep 12 23:48:27.624619 systemd-journald[1150]: Runtime Journal (/run/log/journal/cd434354324f4e3999b87a0c5795a662) is 6M, max 48.5M, 42.4M free.
Sep 12 23:48:27.442473 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 23:48:27.453411 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 23:48:27.453821 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 23:48:27.629931 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 23:48:27.631297 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 23:48:27.632445 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 23:48:27.633444 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 23:48:27.634327 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 23:48:27.635540 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 23:48:27.636670 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 23:48:27.637697 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 23:48:27.638918 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 23:48:27.640251 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 23:48:27.640432 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 23:48:27.641587 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:48:27.641781 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:48:27.642940 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 23:48:27.643094 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 23:48:27.644197 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:48:27.644361 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:48:27.645528 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 23:48:27.645700 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 23:48:27.646939 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:48:27.647091 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:48:27.648267 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 23:48:27.649512 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 23:48:27.650871 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 23:48:27.652169 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 23:48:27.664399 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 23:48:27.666632 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 23:48:27.668580 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 23:48:27.669454 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 23:48:27.669482 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 23:48:27.671114 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 23:48:27.683538 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 23:48:27.684479 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:48:27.685658 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 23:48:27.690325 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 23:48:27.691285 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 23:48:27.692331 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 23:48:27.693296 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 23:48:27.694278 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 23:48:27.711932 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 23:48:27.716531 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 23:48:27.718402 systemd-journald[1150]: Time spent on flushing to /var/log/journal/cd434354324f4e3999b87a0c5795a662 is 25.591ms for 889 entries.
Sep 12 23:48:27.718402 systemd-journald[1150]: System Journal (/var/log/journal/cd434354324f4e3999b87a0c5795a662) is 8M, max 195.6M, 187.6M free.
Sep 12 23:48:27.760439 systemd-journald[1150]: Received client request to flush runtime journal.
Sep 12 23:48:27.760535 kernel: loop0: detected capacity change from 0 to 107312
Sep 12 23:48:27.760561 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 23:48:27.720070 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 23:48:27.722702 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 23:48:27.724629 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 23:48:27.726147 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 23:48:27.731321 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 23:48:27.734528 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 23:48:27.744874 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 23:48:27.758656 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Sep 12 23:48:27.758671 systemd-tmpfiles[1199]: ACLs are not supported, ignoring.
Sep 12 23:48:27.765332 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 23:48:27.767429 kernel: loop1: detected capacity change from 0 to 138376
Sep 12 23:48:27.767889 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 23:48:27.773535 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 23:48:27.774958 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 23:48:27.792405 kernel: loop2: detected capacity change from 0 to 211168
Sep 12 23:48:27.803930 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 23:48:27.808573 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 23:48:27.813459 kernel: loop3: detected capacity change from 0 to 107312
Sep 12 23:48:27.819417 kernel: loop4: detected capacity change from 0 to 138376
Sep 12 23:48:27.827393 kernel: loop5: detected capacity change from 0 to 211168
Sep 12 23:48:27.831910 (sd-merge)[1221]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 23:48:27.832737 (sd-merge)[1221]: Merged extensions into '/usr'.
Sep 12 23:48:27.834175 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 12 23:48:27.834197 systemd-tmpfiles[1220]: ACLs are not supported, ignoring.
Sep 12 23:48:27.836694 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 23:48:27.836807 systemd[1]: Reloading...
Sep 12 23:48:27.892406 zram_generator::config[1248]: No configuration found.
Sep 12 23:48:27.973254 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 23:48:27.986733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 23:48:28.050701 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 23:48:28.051141 systemd[1]: Reloading finished in 213 ms.
Sep 12 23:48:28.083465 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 23:48:28.084741 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 23:48:28.085983 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 23:48:28.101590 systemd[1]: Starting ensure-sysext.service...
Sep 12 23:48:28.103223 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 23:48:28.112700 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)...
Sep 12 23:48:28.112714 systemd[1]: Reloading...
Sep 12 23:48:28.121637 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 23:48:28.121680 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 23:48:28.121866 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 23:48:28.122014 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 23:48:28.122594 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 23:48:28.122793 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Sep 12 23:48:28.122840 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Sep 12 23:48:28.125465 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 23:48:28.125478 systemd-tmpfiles[1286]: Skipping /boot
Sep 12 23:48:28.134847 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 23:48:28.134859 systemd-tmpfiles[1286]: Skipping /boot
Sep 12 23:48:28.157508 zram_generator::config[1313]: No configuration found.
Sep 12 23:48:28.226211 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 23:48:28.288158 systemd[1]: Reloading finished in 175 ms.
Sep 12 23:48:28.300448 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 23:48:28.305884 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 23:48:28.316480 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 23:48:28.318597 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 23:48:28.320441 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 23:48:28.322821 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 23:48:28.325672 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 23:48:28.329513 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 23:48:28.334016 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:48:28.335067 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:48:28.340544 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:48:28.343812 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:48:28.344797 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:48:28.344919 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 23:48:28.347455 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 23:48:28.355029 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:48:28.355220 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:48:28.355362 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 23:48:28.356824 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 23:48:28.363100 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 23:48:28.365318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:48:28.367641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:48:28.369247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:48:28.369414 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:48:28.370999 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:48:28.371159 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:48:28.372746 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 23:48:28.374264 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 23:48:28.377327 systemd-udevd[1354]: Using default interface naming scheme 'v255'.
Sep 12 23:48:28.378379 augenrules[1380]: No rules
Sep 12 23:48:28.387744 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 23:48:28.389327 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 23:48:28.389554 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 23:48:28.396982 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 23:48:28.405176 systemd[1]: Finished ensure-sysext.service.
Sep 12 23:48:28.411591 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 23:48:28.412406 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 23:48:28.413980 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 23:48:28.433947 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 23:48:28.436498 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 23:48:28.439080 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 23:48:28.441585 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 23:48:28.441633 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 23:48:28.445530 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 23:48:28.449684 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 23:48:28.450899 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 23:48:28.451426 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 23:48:28.451610 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 23:48:28.453561 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 23:48:28.453748 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 23:48:28.455238 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 23:48:28.456039 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 23:48:28.461094 augenrules[1420]: /sbin/augenrules: No change
Sep 12 23:48:28.461156 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 23:48:28.461296 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 23:48:28.468604 augenrules[1452]: No rules
Sep 12 23:48:28.469363 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 23:48:28.469680 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 23:48:28.483287 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 23:48:28.483343 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 23:48:28.484202 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 12 23:48:28.519707 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 23:48:28.532707 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 23:48:28.534989 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 23:48:28.551426 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 23:48:28.592321 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 23:48:28.593573 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 23:48:28.598154 systemd-networkd[1430]: lo: Link UP Sep 12 23:48:28.598162 systemd-networkd[1430]: lo: Gained carrier Sep 12 23:48:28.598998 systemd-networkd[1430]: Enumeration completed Sep 12 23:48:28.599084 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 23:48:28.599396 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:48:28.599400 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 23:48:28.599966 systemd-networkd[1430]: eth0: Link UP Sep 12 23:48:28.600070 systemd-networkd[1430]: eth0: Gained carrier Sep 12 23:48:28.600084 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 23:48:28.604531 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 23:48:28.605540 systemd-resolved[1352]: Positive Trust Anchors: Sep 12 23:48:28.605551 systemd-resolved[1352]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 23:48:28.605582 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 23:48:28.608699 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 23:48:28.614869 systemd-resolved[1352]: Defaulting to hostname 'linux'. Sep 12 23:48:28.616548 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 23:48:28.617445 systemd[1]: Reached target network.target - Network. Sep 12 23:48:28.617453 systemd-networkd[1430]: eth0: DHCPv4 address 10.0.0.101/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 23:48:28.618153 systemd-timesyncd[1433]: Network configuration changed, trying to establish connection. Sep 12 23:48:28.618356 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 23:48:28.199608 systemd-journald[1150]: Time jumped backwards, rotating. Sep 12 23:48:28.619750 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 23:48:28.189253 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 23:48:28.190452 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 23:48:28.192425 systemd-resolved[1352]: Clock change detected. Flushing caches. Sep 12 23:48:28.192483 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
Sep 12 23:48:28.193317 systemd-timesyncd[1433]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 23:48:28.193365 systemd-timesyncd[1433]: Initial clock synchronization to Fri 2025-09-12 23:48:28.188359 UTC. Sep 12 23:48:28.193525 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 23:48:28.195321 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 23:48:28.197372 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 23:48:28.197399 systemd[1]: Reached target paths.target - Path Units. Sep 12 23:48:28.198102 systemd[1]: Reached target timers.target - Timer Units. Sep 12 23:48:28.201135 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 23:48:28.203422 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 23:48:28.207849 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 23:48:28.209454 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 23:48:28.210540 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 23:48:28.219253 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 23:48:28.220635 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 23:48:28.222986 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 23:48:28.224624 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 23:48:28.233904 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 23:48:28.235026 systemd[1]: Reached target basic.target - Basic System. 
Sep 12 23:48:28.236594 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:48:28.236627 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 23:48:28.239269 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 23:48:28.241065 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 23:48:28.242912 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 23:48:28.250976 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 23:48:28.252822 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 23:48:28.253669 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 23:48:28.254668 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 23:48:28.257481 jq[1497]: false Sep 12 23:48:28.257791 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 23:48:28.261270 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 23:48:28.263135 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 23:48:28.267259 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 23:48:28.269346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 23:48:28.271040 extend-filesystems[1498]: Found /dev/vda6 Sep 12 23:48:28.271040 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 23:48:28.271503 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Sep 12 23:48:28.273797 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 23:48:28.275359 extend-filesystems[1498]: Found /dev/vda9 Sep 12 23:48:28.276332 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 23:48:28.278019 extend-filesystems[1498]: Checking size of /dev/vda9 Sep 12 23:48:28.281178 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 23:48:28.282928 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 23:48:28.286433 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 23:48:28.286716 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 23:48:28.286876 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 23:48:28.289006 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 23:48:28.289207 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 23:48:28.299365 jq[1515]: true Sep 12 23:48:28.304689 extend-filesystems[1498]: Resized partition /dev/vda9 Sep 12 23:48:28.304764 (ntainerd)[1530]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 23:48:28.308508 extend-filesystems[1535]: resize2fs 1.47.2 (1-Jan-2025) Sep 12 23:48:28.315521 update_engine[1514]: I20250912 23:48:28.313720 1514 main.cc:92] Flatcar Update Engine starting Sep 12 23:48:28.316168 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 23:48:28.330147 tar[1523]: linux-arm64/LICENSE Sep 12 23:48:28.330147 tar[1523]: linux-arm64/helm Sep 12 23:48:28.338364 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 23:48:28.350888 jq[1538]: true Sep 12 23:48:28.356088 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Sep 12 23:48:28.356545 extend-filesystems[1535]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 23:48:28.356545 extend-filesystems[1535]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 23:48:28.356545 extend-filesystems[1535]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 23:48:28.367924 extend-filesystems[1498]: Resized filesystem in /dev/vda9 Sep 12 23:48:28.365476 dbus-daemon[1495]: [system] SELinux support is enabled Sep 12 23:48:28.357723 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 23:48:28.371375 update_engine[1514]: I20250912 23:48:28.370980 1514 update_check_scheduler.cc:74] Next update check in 6m12s Sep 12 23:48:28.370814 systemd-logind[1508]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 23:48:28.371928 systemd-logind[1508]: New seat seat0. Sep 12 23:48:28.388665 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 23:48:28.392159 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 23:48:28.393778 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 23:48:28.403478 dbus-daemon[1495]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 23:48:28.405354 systemd[1]: Started update-engine.service - Update Engine. Sep 12 23:48:28.407553 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 23:48:28.408319 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 23:48:28.409363 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
Sep 12 23:48:28.409476 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 23:48:28.418404 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 23:48:28.425642 bash[1562]: Updated "/home/core/.ssh/authorized_keys" Sep 12 23:48:28.434169 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 23:48:28.435626 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 23:48:28.480579 locksmithd[1567]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 23:48:28.493820 containerd[1530]: time="2025-09-12T23:48:28Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 23:48:28.496612 containerd[1530]: time="2025-09-12T23:48:28.496564713Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Sep 12 23:48:28.506179 containerd[1530]: time="2025-09-12T23:48:28.506123433Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.52µs" Sep 12 23:48:28.506179 containerd[1530]: time="2025-09-12T23:48:28.506168553Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 23:48:28.506275 containerd[1530]: time="2025-09-12T23:48:28.506188593Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 23:48:28.506374 containerd[1530]: time="2025-09-12T23:48:28.506340753Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 23:48:28.506374 containerd[1530]: time="2025-09-12T23:48:28.506366553Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 
23:48:28.506417 containerd[1530]: time="2025-09-12T23:48:28.506393073Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 23:48:28.506478 containerd[1530]: time="2025-09-12T23:48:28.506461313Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 23:48:28.506497 containerd[1530]: time="2025-09-12T23:48:28.506477753Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 23:48:28.506726 containerd[1530]: time="2025-09-12T23:48:28.506698593Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 23:48:28.506726 containerd[1530]: time="2025-09-12T23:48:28.506720473Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 23:48:28.506760 containerd[1530]: time="2025-09-12T23:48:28.506731833Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 23:48:28.506760 containerd[1530]: time="2025-09-12T23:48:28.506739913Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 23:48:28.506819 containerd[1530]: time="2025-09-12T23:48:28.506805393Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 23:48:28.507002 containerd[1530]: time="2025-09-12T23:48:28.506984673Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 23:48:28.507032 containerd[1530]: time="2025-09-12T23:48:28.507019153Z" 
level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 23:48:28.507050 containerd[1530]: time="2025-09-12T23:48:28.507032753Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 23:48:28.507074 containerd[1530]: time="2025-09-12T23:48:28.507064353Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 23:48:28.507682 containerd[1530]: time="2025-09-12T23:48:28.507646953Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 23:48:28.507779 containerd[1530]: time="2025-09-12T23:48:28.507739433Z" level=info msg="metadata content store policy set" policy=shared Sep 12 23:48:28.511739 containerd[1530]: time="2025-09-12T23:48:28.511708273Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 23:48:28.511819 containerd[1530]: time="2025-09-12T23:48:28.511758193Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 23:48:28.511819 containerd[1530]: time="2025-09-12T23:48:28.511773233Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 23:48:28.511819 containerd[1530]: time="2025-09-12T23:48:28.511784793Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 23:48:28.511819 containerd[1530]: time="2025-09-12T23:48:28.511796313Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 23:48:28.511819 containerd[1530]: time="2025-09-12T23:48:28.511811873Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 23:48:28.511913 
containerd[1530]: time="2025-09-12T23:48:28.511823673Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 23:48:28.511913 containerd[1530]: time="2025-09-12T23:48:28.511834633Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 23:48:28.511913 containerd[1530]: time="2025-09-12T23:48:28.511845193Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 23:48:28.511913 containerd[1530]: time="2025-09-12T23:48:28.511855113Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 23:48:28.511913 containerd[1530]: time="2025-09-12T23:48:28.511864513Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 23:48:28.511913 containerd[1530]: time="2025-09-12T23:48:28.511876233Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 23:48:28.512005 containerd[1530]: time="2025-09-12T23:48:28.511984953Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 23:48:28.512021 containerd[1530]: time="2025-09-12T23:48:28.512003593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 23:48:28.512021 containerd[1530]: time="2025-09-12T23:48:28.512018033Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 23:48:28.512053 containerd[1530]: time="2025-09-12T23:48:28.512028953Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 23:48:28.512053 containerd[1530]: time="2025-09-12T23:48:28.512045473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 23:48:28.512086 containerd[1530]: 
time="2025-09-12T23:48:28.512059073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 23:48:28.512086 containerd[1530]: time="2025-09-12T23:48:28.512071313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 23:48:28.512086 containerd[1530]: time="2025-09-12T23:48:28.512080713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 23:48:28.512158 containerd[1530]: time="2025-09-12T23:48:28.512092553Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 23:48:28.512158 containerd[1530]: time="2025-09-12T23:48:28.512108153Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 23:48:28.512158 containerd[1530]: time="2025-09-12T23:48:28.512117793Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 23:48:28.512347 containerd[1530]: time="2025-09-12T23:48:28.512327353Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 23:48:28.512370 containerd[1530]: time="2025-09-12T23:48:28.512355233Z" level=info msg="Start snapshots syncer" Sep 12 23:48:28.512421 containerd[1530]: time="2025-09-12T23:48:28.512381873Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 23:48:28.512632 containerd[1530]: time="2025-09-12T23:48:28.512593913Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 23:48:28.512729 containerd[1530]: time="2025-09-12T23:48:28.512650353Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 23:48:28.512750 containerd[1530]: time="2025-09-12T23:48:28.512725793Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 23:48:28.512861 containerd[1530]: time="2025-09-12T23:48:28.512840673Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 23:48:28.512938 containerd[1530]: time="2025-09-12T23:48:28.512871353Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 23:48:28.512938 containerd[1530]: time="2025-09-12T23:48:28.512887313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 23:48:28.512938 containerd[1530]: time="2025-09-12T23:48:28.512897513Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 23:48:28.512938 containerd[1530]: time="2025-09-12T23:48:28.512909473Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 23:48:28.512938 containerd[1530]: time="2025-09-12T23:48:28.512919513Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 23:48:28.512938 containerd[1530]: time="2025-09-12T23:48:28.512929673Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 23:48:28.513027 containerd[1530]: time="2025-09-12T23:48:28.512954153Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 23:48:28.513027 containerd[1530]: time="2025-09-12T23:48:28.512965713Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 23:48:28.513027 containerd[1530]: time="2025-09-12T23:48:28.512976753Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 23:48:28.513027 containerd[1530]: time="2025-09-12T23:48:28.513006433Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 23:48:28.513027 containerd[1530]: time="2025-09-12T23:48:28.513019073Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 23:48:28.513027 containerd[1530]: time="2025-09-12T23:48:28.513027433Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 23:48:28.513117 containerd[1530]: time="2025-09-12T23:48:28.513038313Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 23:48:28.513117 containerd[1530]: time="2025-09-12T23:48:28.513046433Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 23:48:28.513117 containerd[1530]: time="2025-09-12T23:48:28.513055593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 23:48:28.513117 containerd[1530]: time="2025-09-12T23:48:28.513065993Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 23:48:28.513200 containerd[1530]: time="2025-09-12T23:48:28.513175433Z" level=info msg="runtime interface created" Sep 12 23:48:28.513200 containerd[1530]: time="2025-09-12T23:48:28.513183233Z" level=info msg="created NRI interface" Sep 12 23:48:28.513200 containerd[1530]: time="2025-09-12T23:48:28.513194073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 23:48:28.513245 containerd[1530]: time="2025-09-12T23:48:28.513205433Z" level=info msg="Connect containerd service" Sep 12 23:48:28.513245 containerd[1530]: time="2025-09-12T23:48:28.513231273Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 23:48:28.514080 
containerd[1530]: time="2025-09-12T23:48:28.514054833Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 23:48:28.601167 containerd[1530]: time="2025-09-12T23:48:28.600866593Z" level=info msg="Start subscribing containerd event" Sep 12 23:48:28.601167 containerd[1530]: time="2025-09-12T23:48:28.600944873Z" level=info msg="Start recovering state" Sep 12 23:48:28.601167 containerd[1530]: time="2025-09-12T23:48:28.601028193Z" level=info msg="Start event monitor" Sep 12 23:48:28.601167 containerd[1530]: time="2025-09-12T23:48:28.601055073Z" level=info msg="Start cni network conf syncer for default" Sep 12 23:48:28.601167 containerd[1530]: time="2025-09-12T23:48:28.601068193Z" level=info msg="Start streaming server" Sep 12 23:48:28.601167 containerd[1530]: time="2025-09-12T23:48:28.601079793Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 23:48:28.601167 containerd[1530]: time="2025-09-12T23:48:28.601087033Z" level=info msg="runtime interface starting up..." Sep 12 23:48:28.601167 containerd[1530]: time="2025-09-12T23:48:28.601092873Z" level=info msg="starting plugins..." Sep 12 23:48:28.601167 containerd[1530]: time="2025-09-12T23:48:28.601105113Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 23:48:28.601167 containerd[1530]: time="2025-09-12T23:48:28.601106553Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 23:48:28.601403 containerd[1530]: time="2025-09-12T23:48:28.601379313Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 23:48:28.603381 containerd[1530]: time="2025-09-12T23:48:28.602321473Z" level=info msg="containerd successfully booted in 0.108960s" Sep 12 23:48:28.602455 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 12 23:48:28.750014 tar[1523]: linux-arm64/README.md Sep 12 23:48:28.776180 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 23:48:29.066868 sshd_keygen[1522]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 23:48:29.085909 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 23:48:29.090661 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 23:48:29.116755 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 23:48:29.118184 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 23:48:29.120568 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 23:48:29.154438 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 23:48:29.156951 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 23:48:29.158886 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 23:48:29.159998 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 23:48:30.036322 systemd-networkd[1430]: eth0: Gained IPv6LL Sep 12 23:48:30.038691 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 23:48:30.040109 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 23:48:30.042286 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 23:48:30.044395 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 23:48:30.046203 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 23:48:30.066094 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 23:48:30.066721 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 23:48:30.068107 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 12 23:48:30.069782 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 23:48:30.629387 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 23:48:30.630714 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 23:48:30.632276 systemd[1]: Startup finished in 2.005s (kernel) + 5.459s (initrd) + 4.042s (userspace) = 11.507s. Sep 12 23:48:30.633191 (kubelet)[1636]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 23:48:30.986094 kubelet[1636]: E0912 23:48:30.985996 1636 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 23:48:30.988868 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 23:48:30.989007 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 23:48:30.989352 systemd[1]: kubelet.service: Consumed 759ms CPU time, 259.9M memory peak. Sep 12 23:48:33.781572 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 23:48:33.782740 systemd[1]: Started sshd@0-10.0.0.101:22-10.0.0.1:50604.service - OpenSSH per-connection server daemon (10.0.0.1:50604). Sep 12 23:48:33.876544 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 50604 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:48:33.878164 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:48:33.884321 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 23:48:33.885529 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Sep 12 23:48:33.892245 systemd-logind[1508]: New session 1 of user core.
Sep 12 23:48:33.912778 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 23:48:33.917437 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 23:48:33.928097 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 23:48:33.930829 systemd-logind[1508]: New session c1 of user core.
Sep 12 23:48:34.036388 systemd[1653]: Queued start job for default target default.target.
Sep 12 23:48:34.050052 systemd[1653]: Created slice app.slice - User Application Slice.
Sep 12 23:48:34.050081 systemd[1653]: Reached target paths.target - Paths.
Sep 12 23:48:34.050117 systemd[1653]: Reached target timers.target - Timers.
Sep 12 23:48:34.051349 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 23:48:34.060073 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 23:48:34.060133 systemd[1653]: Reached target sockets.target - Sockets.
Sep 12 23:48:34.060199 systemd[1653]: Reached target basic.target - Basic System.
Sep 12 23:48:34.060227 systemd[1653]: Reached target default.target - Main User Target.
Sep 12 23:48:34.060251 systemd[1653]: Startup finished in 123ms.
Sep 12 23:48:34.060465 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 23:48:34.061832 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 23:48:34.120451 systemd[1]: Started sshd@1-10.0.0.101:22-10.0.0.1:50606.service - OpenSSH per-connection server daemon (10.0.0.1:50606).
Sep 12 23:48:34.177496 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 50606 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:48:34.179103 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:48:34.183712 systemd-logind[1508]: New session 2 of user core.
Sep 12 23:48:34.196317 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 23:48:34.249436 sshd[1667]: Connection closed by 10.0.0.1 port 50606
Sep 12 23:48:34.249787 sshd-session[1665]: pam_unix(sshd:session): session closed for user core
Sep 12 23:48:34.265582 systemd[1]: sshd@1-10.0.0.101:22-10.0.0.1:50606.service: Deactivated successfully.
Sep 12 23:48:34.267287 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 23:48:34.268062 systemd-logind[1508]: Session 2 logged out. Waiting for processes to exit.
Sep 12 23:48:34.270444 systemd[1]: Started sshd@2-10.0.0.101:22-10.0.0.1:50616.service - OpenSSH per-connection server daemon (10.0.0.1:50616).
Sep 12 23:48:34.271263 systemd-logind[1508]: Removed session 2.
Sep 12 23:48:34.328253 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 50616 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:48:34.329406 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:48:34.333846 systemd-logind[1508]: New session 3 of user core.
Sep 12 23:48:34.349313 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 23:48:34.397901 sshd[1675]: Connection closed by 10.0.0.1 port 50616
Sep 12 23:48:34.398217 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
Sep 12 23:48:34.407207 systemd[1]: sshd@2-10.0.0.101:22-10.0.0.1:50616.service: Deactivated successfully.
Sep 12 23:48:34.408703 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 23:48:34.411469 systemd-logind[1508]: Session 3 logged out. Waiting for processes to exit.
Sep 12 23:48:34.412563 systemd[1]: Started sshd@3-10.0.0.101:22-10.0.0.1:50618.service - OpenSSH per-connection server daemon (10.0.0.1:50618).
Sep 12 23:48:34.413498 systemd-logind[1508]: Removed session 3.
Sep 12 23:48:34.471734 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 50618 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:48:34.473044 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:48:34.477226 systemd-logind[1508]: New session 4 of user core.
Sep 12 23:48:34.486347 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 23:48:34.537843 sshd[1684]: Connection closed by 10.0.0.1 port 50618
Sep 12 23:48:34.537721 sshd-session[1681]: pam_unix(sshd:session): session closed for user core
Sep 12 23:48:34.555486 systemd[1]: sshd@3-10.0.0.101:22-10.0.0.1:50618.service: Deactivated successfully.
Sep 12 23:48:34.557038 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 23:48:34.557702 systemd-logind[1508]: Session 4 logged out. Waiting for processes to exit.
Sep 12 23:48:34.559967 systemd[1]: Started sshd@4-10.0.0.101:22-10.0.0.1:50624.service - OpenSSH per-connection server daemon (10.0.0.1:50624).
Sep 12 23:48:34.560902 systemd-logind[1508]: Removed session 4.
Sep 12 23:48:34.618125 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 50624 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:48:34.618988 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:48:34.622978 systemd-logind[1508]: New session 5 of user core.
Sep 12 23:48:34.634327 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 23:48:34.691133 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 23:48:34.691451 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 23:48:34.708832 sudo[1694]: pam_unix(sudo:session): session closed for user root
Sep 12 23:48:34.711763 sshd[1693]: Connection closed by 10.0.0.1 port 50624
Sep 12 23:48:34.710892 sshd-session[1690]: pam_unix(sshd:session): session closed for user core
Sep 12 23:48:34.720442 systemd[1]: sshd@4-10.0.0.101:22-10.0.0.1:50624.service: Deactivated successfully.
Sep 12 23:48:34.722763 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 23:48:34.724470 systemd-logind[1508]: Session 5 logged out. Waiting for processes to exit.
Sep 12 23:48:34.726049 systemd[1]: Started sshd@5-10.0.0.101:22-10.0.0.1:50638.service - OpenSSH per-connection server daemon (10.0.0.1:50638).
Sep 12 23:48:34.727029 systemd-logind[1508]: Removed session 5.
Sep 12 23:48:34.776635 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 50638 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:48:34.778030 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:48:34.782219 systemd-logind[1508]: New session 6 of user core.
Sep 12 23:48:34.794320 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 23:48:34.845120 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 23:48:34.845427 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 23:48:34.850019 sudo[1704]: pam_unix(sudo:session): session closed for user root
Sep 12 23:48:34.854987 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 23:48:34.855285 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 23:48:34.863724 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 23:48:34.905851 augenrules[1726]: No rules
Sep 12 23:48:34.907056 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 23:48:34.908229 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 23:48:34.909340 sudo[1703]: pam_unix(sudo:session): session closed for user root
Sep 12 23:48:34.910609 sshd[1702]: Connection closed by 10.0.0.1 port 50638
Sep 12 23:48:34.911070 sshd-session[1700]: pam_unix(sshd:session): session closed for user core
Sep 12 23:48:34.922195 systemd[1]: sshd@5-10.0.0.101:22-10.0.0.1:50638.service: Deactivated successfully.
Sep 12 23:48:34.923667 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 23:48:34.925307 systemd-logind[1508]: Session 6 logged out. Waiting for processes to exit.
Sep 12 23:48:34.927406 systemd[1]: Started sshd@6-10.0.0.101:22-10.0.0.1:50644.service - OpenSSH per-connection server daemon (10.0.0.1:50644).
Sep 12 23:48:34.928364 systemd-logind[1508]: Removed session 6.
Sep 12 23:48:34.982772 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 50644 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:48:34.984110 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:48:34.988212 systemd-logind[1508]: New session 7 of user core.
Sep 12 23:48:34.995297 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 23:48:35.045831 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 23:48:35.046102 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 23:48:35.338025 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 23:48:35.352454 (dockerd)[1758]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 23:48:35.587149 dockerd[1758]: time="2025-09-12T23:48:35.587071273Z" level=info msg="Starting up"
Sep 12 23:48:35.589434 dockerd[1758]: time="2025-09-12T23:48:35.589350953Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 12 23:48:35.701228 dockerd[1758]: time="2025-09-12T23:48:35.701185753Z" level=info msg="Loading containers: start."
Sep 12 23:48:35.711175 kernel: Initializing XFRM netlink socket
Sep 12 23:48:35.920977 systemd-networkd[1430]: docker0: Link UP
Sep 12 23:48:35.925230 dockerd[1758]: time="2025-09-12T23:48:35.925192073Z" level=info msg="Loading containers: done."
Sep 12 23:48:35.937765 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3932577644-merged.mount: Deactivated successfully.
Sep 12 23:48:35.942865 dockerd[1758]: time="2025-09-12T23:48:35.942812833Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 23:48:35.942963 dockerd[1758]: time="2025-09-12T23:48:35.942900873Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 12 23:48:35.943012 dockerd[1758]: time="2025-09-12T23:48:35.942994833Z" level=info msg="Initializing buildkit"
Sep 12 23:48:35.965276 dockerd[1758]: time="2025-09-12T23:48:35.965237713Z" level=info msg="Completed buildkit initialization"
Sep 12 23:48:35.970039 dockerd[1758]: time="2025-09-12T23:48:35.969993713Z" level=info msg="Daemon has completed initialization"
Sep 12 23:48:35.970235 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 23:48:35.970719 dockerd[1758]: time="2025-09-12T23:48:35.970596313Z" level=info msg="API listen on /run/docker.sock"
Sep 12 23:48:36.965807 containerd[1530]: time="2025-09-12T23:48:36.965763953Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 12 23:48:37.573950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2920491397.mount: Deactivated successfully.
Sep 12 23:48:38.635430 containerd[1530]: time="2025-09-12T23:48:38.635372273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:38.636312 containerd[1530]: time="2025-09-12T23:48:38.636268353Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230"
Sep 12 23:48:38.636983 containerd[1530]: time="2025-09-12T23:48:38.636938273Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:38.639939 containerd[1530]: time="2025-09-12T23:48:38.639903033Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:38.641586 containerd[1530]: time="2025-09-12T23:48:38.641443233Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.6756368s"
Sep 12 23:48:38.641586 containerd[1530]: time="2025-09-12T23:48:38.641476913Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\""
Sep 12 23:48:38.642680 containerd[1530]: time="2025-09-12T23:48:38.642619593Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 12 23:48:39.849230 containerd[1530]: time="2025-09-12T23:48:39.849176113Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:39.849655 containerd[1530]: time="2025-09-12T23:48:39.849623553Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919"
Sep 12 23:48:39.850624 containerd[1530]: time="2025-09-12T23:48:39.850585313Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:39.853023 containerd[1530]: time="2025-09-12T23:48:39.852968633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:39.857073 containerd[1530]: time="2025-09-12T23:48:39.857039033Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.21425572s"
Sep 12 23:48:39.857167 containerd[1530]: time="2025-09-12T23:48:39.857076393Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\""
Sep 12 23:48:39.857501 containerd[1530]: time="2025-09-12T23:48:39.857456073Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 12 23:48:40.978437 containerd[1530]: time="2025-09-12T23:48:40.978114713Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:40.979217 containerd[1530]: time="2025-09-12T23:48:40.979069153Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979"
Sep 12 23:48:40.980016 containerd[1530]: time="2025-09-12T23:48:40.979986073Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:40.982661 containerd[1530]: time="2025-09-12T23:48:40.982626273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:40.984558 containerd[1530]: time="2025-09-12T23:48:40.984366873Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.12686052s"
Sep 12 23:48:40.984558 containerd[1530]: time="2025-09-12T23:48:40.984411753Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\""
Sep 12 23:48:40.984864 containerd[1530]: time="2025-09-12T23:48:40.984804233Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 12 23:48:40.991766 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 23:48:40.993226 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:48:41.106580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:48:41.110466 (kubelet)[2041]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 23:48:41.145494 kubelet[2041]: E0912 23:48:41.145437 2041 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 23:48:41.148743 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 23:48:41.148878 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 23:48:41.149390 systemd[1]: kubelet.service: Consumed 137ms CPU time, 107.7M memory peak.
Sep 12 23:48:41.983090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2495239824.mount: Deactivated successfully.
Sep 12 23:48:42.364051 containerd[1530]: time="2025-09-12T23:48:42.363666313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:42.364433 containerd[1530]: time="2025-09-12T23:48:42.364392313Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108"
Sep 12 23:48:42.365102 containerd[1530]: time="2025-09-12T23:48:42.365073313Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:42.367496 containerd[1530]: time="2025-09-12T23:48:42.367107273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:42.367692 containerd[1530]: time="2025-09-12T23:48:42.367639553Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.3828056s"
Sep 12 23:48:42.367692 containerd[1530]: time="2025-09-12T23:48:42.367663393Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\""
Sep 12 23:48:42.368226 containerd[1530]: time="2025-09-12T23:48:42.368117113Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 12 23:48:42.852039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1004065210.mount: Deactivated successfully.
Sep 12 23:48:43.614979 containerd[1530]: time="2025-09-12T23:48:43.614902993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:43.616098 containerd[1530]: time="2025-09-12T23:48:43.615869033Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Sep 12 23:48:43.617281 containerd[1530]: time="2025-09-12T23:48:43.617223913Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:43.622287 containerd[1530]: time="2025-09-12T23:48:43.622205313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:43.624434 containerd[1530]: time="2025-09-12T23:48:43.623338633Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.25519732s"
Sep 12 23:48:43.624774 containerd[1530]: time="2025-09-12T23:48:43.624456633Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 12 23:48:43.625086 containerd[1530]: time="2025-09-12T23:48:43.625021633Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 23:48:44.151016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1151965204.mount: Deactivated successfully.
Sep 12 23:48:44.159406 containerd[1530]: time="2025-09-12T23:48:44.159324233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:48:44.160792 containerd[1530]: time="2025-09-12T23:48:44.160736233Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 12 23:48:44.162457 containerd[1530]: time="2025-09-12T23:48:44.162355313Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:48:44.165034 containerd[1530]: time="2025-09-12T23:48:44.164814673Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 23:48:44.167022 containerd[1530]: time="2025-09-12T23:48:44.166456593Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 541.39448ms"
Sep 12 23:48:44.167022 containerd[1530]: time="2025-09-12T23:48:44.166513113Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 12 23:48:44.168899 containerd[1530]: time="2025-09-12T23:48:44.167713073Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 12 23:48:44.649533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1131449267.mount: Deactivated successfully.
Sep 12 23:48:46.387696 containerd[1530]: time="2025-09-12T23:48:46.387643473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:46.389059 containerd[1530]: time="2025-09-12T23:48:46.388806433Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859"
Sep 12 23:48:46.389851 containerd[1530]: time="2025-09-12T23:48:46.389821793Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:46.392905 containerd[1530]: time="2025-09-12T23:48:46.392879913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:48:46.394856 containerd[1530]: time="2025-09-12T23:48:46.394740513Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.22698444s"
Sep 12 23:48:46.394856 containerd[1530]: time="2025-09-12T23:48:46.394773393Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 12 23:48:51.241820 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 23:48:51.245677 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:48:51.382762 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:48:51.386891 (kubelet)[2199]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 23:48:51.567262 kubelet[2199]: E0912 23:48:51.566020 2199 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 23:48:51.569042 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 23:48:51.569303 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 23:48:51.571221 systemd[1]: kubelet.service: Consumed 135ms CPU time, 105.6M memory peak.
Sep 12 23:48:51.927070 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:48:51.927453 systemd[1]: kubelet.service: Consumed 135ms CPU time, 105.6M memory peak.
Sep 12 23:48:51.929396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:48:51.949985 systemd[1]: Reload requested from client PID 2215 ('systemctl') (unit session-7.scope)...
Sep 12 23:48:51.949999 systemd[1]: Reloading...
Sep 12 23:48:52.033169 zram_generator::config[2260]: No configuration found.
Sep 12 23:48:52.169607 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 23:48:52.254502 systemd[1]: Reloading finished in 304 ms.
Sep 12 23:48:52.316786 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 23:48:52.316874 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 23:48:52.317127 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:48:52.317204 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95M memory peak.
Sep 12 23:48:52.318961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:48:52.443369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:48:52.460590 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 23:48:52.502198 kubelet[2302]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:48:52.502198 kubelet[2302]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 23:48:52.502198 kubelet[2302]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:48:52.502198 kubelet[2302]: I0912 23:48:52.501889 2302 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 23:48:54.002643 kubelet[2302]: I0912 23:48:54.002550 2302 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 12 23:48:54.002983 kubelet[2302]: I0912 23:48:54.002693 2302 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 23:48:54.003037 kubelet[2302]: I0912 23:48:54.003018 2302 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 12 23:48:54.027565 kubelet[2302]: E0912 23:48:54.027470 2302 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.101:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 12 23:48:54.028120 kubelet[2302]: I0912 23:48:54.027932 2302 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 23:48:54.044597 kubelet[2302]: I0912 23:48:54.044553 2302 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 23:48:54.047312 kubelet[2302]: I0912 23:48:54.047279 2302 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 23:48:54.047621 kubelet[2302]: I0912 23:48:54.047596 2302 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 23:48:54.047782 kubelet[2302]: I0912 23:48:54.047622 2302 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 23:48:54.047896 kubelet[2302]: I0912 23:48:54.047846 2302 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 23:48:54.047896 kubelet[2302]: I0912 23:48:54.047855 2302 container_manager_linux.go:303] "Creating device plugin manager"
Sep 12 23:48:54.048105 kubelet[2302]: I0912 23:48:54.048040 2302 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 23:48:54.052487 kubelet[2302]: I0912 23:48:54.052432 2302 kubelet.go:480] "Attempting to sync node with API server"
Sep 12 23:48:54.052487 kubelet[2302]: I0912 23:48:54.052467 2302 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 23:48:54.052487 kubelet[2302]: I0912 23:48:54.052492 2302 kubelet.go:386] "Adding apiserver pod source"
Sep 12 23:48:54.053050 kubelet[2302]: I0912 23:48:54.052506 2302 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 23:48:54.053666 kubelet[2302]: I0912 23:48:54.053623 2302 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 12 23:48:54.054391 kubelet[2302]: I0912 23:48:54.054347 2302 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 12 23:48:54.054481 kubelet[2302]: W0912 23:48:54.054467 2302 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 23:48:54.061162 kubelet[2302]: I0912 23:48:54.058672 2302 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 23:48:54.061162 kubelet[2302]: I0912 23:48:54.058727 2302 server.go:1289] "Started kubelet"
Sep 12 23:48:54.066356 kubelet[2302]: I0912 23:48:54.065568 2302 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 23:48:54.066356 kubelet[2302]: I0912 23:48:54.065908 2302 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 23:48:54.066622 kubelet[2302]: I0912 23:48:54.066453 2302 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 23:48:54.067700 kubelet[2302]: I0912 23:48:54.067429 2302 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 23:48:54.068299 kubelet[2302]: I0912 23:48:54.068128 2302 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 23:48:54.069514 kubelet[2302]: I0912 23:48:54.069071 2302 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 23:48:54.069514 kubelet[2302]: E0912 23:48:54.069187 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:48:54.069865 kubelet[2302]: I0912 23:48:54.069787 2302 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 23:48:54.069865 kubelet[2302]: I0912 23:48:54.069853 2302 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 23:48:54.073466 kubelet[2302]: I0912 23:48:54.072009 2302 server.go:317] "Adding debug handlers to kubelet server"
Sep 12 23:48:54.073466 kubelet[2302]: E0912 23:48:54.072033 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 12 23:48:54.073466 kubelet[2302]: E0912 23:48:54.072404 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="200ms"
Sep 12 23:48:54.073466 kubelet[2302]: E0912 23:48:54.072613 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 12 23:48:54.073466 kubelet[2302]: E0912 23:48:54.073400 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.101:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 12 23:48:54.074532 kubelet[2302]: I0912 23:48:54.074500 2302 factory.go:223] Registration of the containerd container factory successfully
Sep 12 23:48:54.074532 kubelet[2302]: I0912 23:48:54.074517 2302 factory.go:223] Registration of the systemd container factory successfully
Sep 12 23:48:54.074804 kubelet[2302]: I0912 23:48:54.074781 2302 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 23:48:54.074916 kubelet[2302]: E0912 23:48:54.073184 2302 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.101:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.101:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864ade070d619f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 23:48:54.058695153 +0000 UTC m=+1.593179361,LastTimestamp:2025-09-12 23:48:54.058695153 +0000 UTC m=+1.593179361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 12 23:48:54.076155 kubelet[2302]: E0912 23:48:54.075890 2302 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 23:48:54.086727 kubelet[2302]: I0912 23:48:54.086697 2302 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 23:48:54.086727 kubelet[2302]: I0912 23:48:54.086715 2302 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 23:48:54.086727 kubelet[2302]: I0912 23:48:54.086737 2302 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 23:48:54.090182 kubelet[2302]: I0912 23:48:54.090106 2302 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 12 23:48:54.091258 kubelet[2302]: I0912 23:48:54.091234 2302 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 12 23:48:54.091368 kubelet[2302]: I0912 23:48:54.091358 2302 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 12 23:48:54.091431 kubelet[2302]: I0912 23:48:54.091421 2302 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 23:48:54.091504 kubelet[2302]: I0912 23:48:54.091495 2302 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 12 23:48:54.091604 kubelet[2302]: E0912 23:48:54.091581 2302 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 23:48:54.170315 kubelet[2302]: E0912 23:48:54.170279 2302 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:48:54.187601 kubelet[2302]: I0912 23:48:54.187280 2302 policy_none.go:49] "None policy: Start"
Sep 12 23:48:54.187601 kubelet[2302]: I0912 23:48:54.187313 2302 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 23:48:54.187601 kubelet[2302]: I0912 23:48:54.187332 2302 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 23:48:54.187946 kubelet[2302]: E0912 23:48:54.187916 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.101:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 12 23:48:54.191878 kubelet[2302]: E0912 23:48:54.191861 2302 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 23:48:54.194359 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 12 23:48:54.209866 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 12 23:48:54.215795 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 12 23:48:54.236273 kubelet[2302]: E0912 23:48:54.236175 2302 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 12 23:48:54.236548 kubelet[2302]: I0912 23:48:54.236399 2302 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 23:48:54.236548 kubelet[2302]: I0912 23:48:54.236419 2302 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 23:48:54.237183 kubelet[2302]: I0912 23:48:54.236875 2302 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 23:48:54.241839 kubelet[2302]: E0912 23:48:54.241816 2302 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 23:48:54.241902 kubelet[2302]: E0912 23:48:54.241863 2302 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 12 23:48:54.274024 kubelet[2302]: E0912 23:48:54.273677 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="400ms"
Sep 12 23:48:54.339087 kubelet[2302]: I0912 23:48:54.339047 2302 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 23:48:54.339590 kubelet[2302]: E0912 23:48:54.339551 2302 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost"
Sep 12 23:48:54.415057 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice.
Sep 12 23:48:54.450382 kubelet[2302]: E0912 23:48:54.448465 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:48:54.453200 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice.
Sep 12 23:48:54.460369 kubelet[2302]: E0912 23:48:54.459310 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:48:54.472698 kubelet[2302]: I0912 23:48:54.472658 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a64b0c2e056175fe58ed5df668cc81f1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a64b0c2e056175fe58ed5df668cc81f1\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 23:48:54.472698 kubelet[2302]: I0912 23:48:54.472704 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a64b0c2e056175fe58ed5df668cc81f1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a64b0c2e056175fe58ed5df668cc81f1\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 23:48:54.472840 kubelet[2302]: I0912 23:48:54.472730 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:48:54.472840 kubelet[2302]: I0912 23:48:54.472754 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:48:54.472840 kubelet[2302]: I0912 23:48:54.472788 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 23:48:54.472840 kubelet[2302]: I0912 23:48:54.472803 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a64b0c2e056175fe58ed5df668cc81f1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a64b0c2e056175fe58ed5df668cc81f1\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 23:48:54.472840 kubelet[2302]: I0912 23:48:54.472817 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:48:54.472968 kubelet[2302]: I0912 23:48:54.472831 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:48:54.472968 kubelet[2302]: I0912 23:48:54.472853 2302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:48:54.477826 systemd[1]: Created slice kubepods-burstable-poda64b0c2e056175fe58ed5df668cc81f1.slice - libcontainer container kubepods-burstable-poda64b0c2e056175fe58ed5df668cc81f1.slice.
Sep 12 23:48:54.487708 kubelet[2302]: E0912 23:48:54.487673 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:48:54.542489 kubelet[2302]: I0912 23:48:54.541252 2302 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 23:48:54.542489 kubelet[2302]: E0912 23:48:54.541630 2302 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost"
Sep 12 23:48:54.674410 kubelet[2302]: E0912 23:48:54.674316 2302 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.101:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.101:6443: connect: connection refused" interval="800ms"
Sep 12 23:48:54.750635 containerd[1530]: time="2025-09-12T23:48:54.750561113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}"
Sep 12 23:48:54.762399 containerd[1530]: time="2025-09-12T23:48:54.762324833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}"
Sep 12 23:48:54.780783 containerd[1530]: time="2025-09-12T23:48:54.780720073Z" level=info msg="connecting to shim 0fb147878eb7320028db2f05d29437c40183d9dc05d23bf859c9c58183b9bb46" address="unix:///run/containerd/s/3aff9c17638124ffd7712fc1f8979cc0b7d5d3e086d0b3208a1471d4abea50f9" namespace=k8s.io protocol=ttrpc version=3
Sep 12 23:48:54.790215 containerd[1530]: time="2025-09-12T23:48:54.789805273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a64b0c2e056175fe58ed5df668cc81f1,Namespace:kube-system,Attempt:0,}"
Sep 12 23:48:54.799185 containerd[1530]: time="2025-09-12T23:48:54.798954953Z" level=info msg="connecting to shim 5ca4c3f1db5c86d22fdbc37e12cebe4c7253090ee7ac8685fd5fe25cbbf8c23f" address="unix:///run/containerd/s/e438c33f3313dbdfc89c76509bbdfe1ae4df6b2956c7104f2e87d586398ca018" namespace=k8s.io protocol=ttrpc version=3
Sep 12 23:48:54.828419 containerd[1530]: time="2025-09-12T23:48:54.828247393Z" level=info msg="connecting to shim d05d537d79512890602f0efaea8fa28b5dc1ae7a335aa74965a2323e98ec113c" address="unix:///run/containerd/s/ae46457bf7a59d9f15882d7b252c6e5383068e4d52a427724f0800b7a24645a7" namespace=k8s.io protocol=ttrpc version=3
Sep 12 23:48:54.839417 systemd[1]: Started cri-containerd-0fb147878eb7320028db2f05d29437c40183d9dc05d23bf859c9c58183b9bb46.scope - libcontainer container 0fb147878eb7320028db2f05d29437c40183d9dc05d23bf859c9c58183b9bb46.
Sep 12 23:48:54.845912 systemd[1]: Started cri-containerd-5ca4c3f1db5c86d22fdbc37e12cebe4c7253090ee7ac8685fd5fe25cbbf8c23f.scope - libcontainer container 5ca4c3f1db5c86d22fdbc37e12cebe4c7253090ee7ac8685fd5fe25cbbf8c23f.
Sep 12 23:48:54.863399 systemd[1]: Started cri-containerd-d05d537d79512890602f0efaea8fa28b5dc1ae7a335aa74965a2323e98ec113c.scope - libcontainer container d05d537d79512890602f0efaea8fa28b5dc1ae7a335aa74965a2323e98ec113c.
Sep 12 23:48:54.913356 containerd[1530]: time="2025-09-12T23:48:54.913242593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fb147878eb7320028db2f05d29437c40183d9dc05d23bf859c9c58183b9bb46\""
Sep 12 23:48:54.921784 containerd[1530]: time="2025-09-12T23:48:54.921708913Z" level=info msg="CreateContainer within sandbox \"0fb147878eb7320028db2f05d29437c40183d9dc05d23bf859c9c58183b9bb46\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 23:48:54.922697 containerd[1530]: time="2025-09-12T23:48:54.922633513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ca4c3f1db5c86d22fdbc37e12cebe4c7253090ee7ac8685fd5fe25cbbf8c23f\""
Sep 12 23:48:54.932427 containerd[1530]: time="2025-09-12T23:48:54.932370153Z" level=info msg="CreateContainer within sandbox \"5ca4c3f1db5c86d22fdbc37e12cebe4c7253090ee7ac8685fd5fe25cbbf8c23f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 23:48:54.936049 containerd[1530]: time="2025-09-12T23:48:54.935850753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a64b0c2e056175fe58ed5df668cc81f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d05d537d79512890602f0efaea8fa28b5dc1ae7a335aa74965a2323e98ec113c\""
Sep 12 23:48:54.943675 kubelet[2302]: I0912 23:48:54.943627 2302 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 23:48:54.943780 containerd[1530]: time="2025-09-12T23:48:54.943710273Z" level=info msg="CreateContainer within sandbox \"d05d537d79512890602f0efaea8fa28b5dc1ae7a335aa74965a2323e98ec113c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 23:48:54.944697 kubelet[2302]: E0912 23:48:54.943993 2302 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.101:6443/api/v1/nodes\": dial tcp 10.0.0.101:6443: connect: connection refused" node="localhost"
Sep 12 23:48:54.946739 containerd[1530]: time="2025-09-12T23:48:54.945643833Z" level=info msg="Container 252a5b529118bc7d064749111226d71a4c441816154998cc439ad109a65ac32f: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:48:54.950360 containerd[1530]: time="2025-09-12T23:48:54.950293593Z" level=info msg="Container dc63d8688c6a210f10c17f61f77a16e77fd698eca9fcdc770d1f468b286035e5: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:48:54.957229 containerd[1530]: time="2025-09-12T23:48:54.957090313Z" level=info msg="Container 53c4929f6dc696cea7a74a7bcf775d48ecce2f9a128b91545c36b69c8ba62352: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:48:54.964501 containerd[1530]: time="2025-09-12T23:48:54.964370433Z" level=info msg="CreateContainer within sandbox \"5ca4c3f1db5c86d22fdbc37e12cebe4c7253090ee7ac8685fd5fe25cbbf8c23f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dc63d8688c6a210f10c17f61f77a16e77fd698eca9fcdc770d1f468b286035e5\""
Sep 12 23:48:54.965361 containerd[1530]: time="2025-09-12T23:48:54.965312353Z" level=info msg="StartContainer for \"dc63d8688c6a210f10c17f61f77a16e77fd698eca9fcdc770d1f468b286035e5\""
Sep 12 23:48:54.966544 containerd[1530]: time="2025-09-12T23:48:54.966508233Z" level=info msg="connecting to shim dc63d8688c6a210f10c17f61f77a16e77fd698eca9fcdc770d1f468b286035e5" address="unix:///run/containerd/s/e438c33f3313dbdfc89c76509bbdfe1ae4df6b2956c7104f2e87d586398ca018" protocol=ttrpc version=3
Sep 12 23:48:54.969779 containerd[1530]: time="2025-09-12T23:48:54.969734793Z" level=info msg="CreateContainer within sandbox \"d05d537d79512890602f0efaea8fa28b5dc1ae7a335aa74965a2323e98ec113c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"53c4929f6dc696cea7a74a7bcf775d48ecce2f9a128b91545c36b69c8ba62352\""
Sep 12 23:48:54.970686 containerd[1530]: time="2025-09-12T23:48:54.970636193Z" level=info msg="StartContainer for \"53c4929f6dc696cea7a74a7bcf775d48ecce2f9a128b91545c36b69c8ba62352\""
Sep 12 23:48:54.971838 containerd[1530]: time="2025-09-12T23:48:54.971795993Z" level=info msg="connecting to shim 53c4929f6dc696cea7a74a7bcf775d48ecce2f9a128b91545c36b69c8ba62352" address="unix:///run/containerd/s/ae46457bf7a59d9f15882d7b252c6e5383068e4d52a427724f0800b7a24645a7" protocol=ttrpc version=3
Sep 12 23:48:54.973708 containerd[1530]: time="2025-09-12T23:48:54.973631753Z" level=info msg="CreateContainer within sandbox \"0fb147878eb7320028db2f05d29437c40183d9dc05d23bf859c9c58183b9bb46\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"252a5b529118bc7d064749111226d71a4c441816154998cc439ad109a65ac32f\""
Sep 12 23:48:54.974207 containerd[1530]: time="2025-09-12T23:48:54.974174313Z" level=info msg="StartContainer for \"252a5b529118bc7d064749111226d71a4c441816154998cc439ad109a65ac32f\""
Sep 12 23:48:54.977310 containerd[1530]: time="2025-09-12T23:48:54.977119833Z" level=info msg="connecting to shim 252a5b529118bc7d064749111226d71a4c441816154998cc439ad109a65ac32f" address="unix:///run/containerd/s/3aff9c17638124ffd7712fc1f8979cc0b7d5d3e086d0b3208a1471d4abea50f9" protocol=ttrpc version=3
Sep 12 23:48:54.993441 systemd[1]: Started cri-containerd-dc63d8688c6a210f10c17f61f77a16e77fd698eca9fcdc770d1f468b286035e5.scope - libcontainer container dc63d8688c6a210f10c17f61f77a16e77fd698eca9fcdc770d1f468b286035e5.
Sep 12 23:48:55.010532 systemd[1]: Started cri-containerd-252a5b529118bc7d064749111226d71a4c441816154998cc439ad109a65ac32f.scope - libcontainer container 252a5b529118bc7d064749111226d71a4c441816154998cc439ad109a65ac32f.
Sep 12 23:48:55.011693 systemd[1]: Started cri-containerd-53c4929f6dc696cea7a74a7bcf775d48ecce2f9a128b91545c36b69c8ba62352.scope - libcontainer container 53c4929f6dc696cea7a74a7bcf775d48ecce2f9a128b91545c36b69c8ba62352.
Sep 12 23:48:55.059795 containerd[1530]: time="2025-09-12T23:48:55.058351273Z" level=info msg="StartContainer for \"dc63d8688c6a210f10c17f61f77a16e77fd698eca9fcdc770d1f468b286035e5\" returns successfully"
Sep 12 23:48:55.074695 containerd[1530]: time="2025-09-12T23:48:55.074635273Z" level=info msg="StartContainer for \"252a5b529118bc7d064749111226d71a4c441816154998cc439ad109a65ac32f\" returns successfully"
Sep 12 23:48:55.085514 containerd[1530]: time="2025-09-12T23:48:55.085453753Z" level=info msg="StartContainer for \"53c4929f6dc696cea7a74a7bcf775d48ecce2f9a128b91545c36b69c8ba62352\" returns successfully"
Sep 12 23:48:55.103093 kubelet[2302]: E0912 23:48:55.102879 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:48:55.107165 kubelet[2302]: E0912 23:48:55.107044 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:48:55.111118 kubelet[2302]: E0912 23:48:55.110859 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:48:55.172754 kubelet[2302]: E0912 23:48:55.172715 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.101:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 12 23:48:55.176636 kubelet[2302]: E0912 23:48:55.176606 2302 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.101:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.101:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 12 23:48:55.746048 kubelet[2302]: I0912 23:48:55.745747 2302 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 23:48:56.112642 kubelet[2302]: E0912 23:48:56.112540 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:48:56.113390 kubelet[2302]: E0912 23:48:56.113026 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:48:57.114794 kubelet[2302]: E0912 23:48:57.114746 2302 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 12 23:48:57.115969 kubelet[2302]: E0912 23:48:57.115941 2302 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 12 23:48:57.174813 kubelet[2302]: I0912 23:48:57.174756 2302 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 12 23:48:57.174813 kubelet[2302]: E0912 23:48:57.174805 2302 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Sep 12 23:48:57.270066 kubelet[2302]: I0912 23:48:57.269996 2302 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:48:57.276433 kubelet[2302]: E0912 23:48:57.276391 2302 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:48:57.276433 kubelet[2302]: I0912 23:48:57.276422 2302 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 23:48:57.278179 kubelet[2302]: E0912 23:48:57.278101 2302 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 12 23:48:57.278179 kubelet[2302]: I0912 23:48:57.278179 2302 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 23:48:57.279816 kubelet[2302]: E0912 23:48:57.279771 2302 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 12 23:48:58.060470 kubelet[2302]: I0912 23:48:58.060403 2302 apiserver.go:52] "Watching apiserver"
Sep 12 23:48:58.070542 kubelet[2302]: I0912 23:48:58.070455 2302 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 23:48:59.407775 systemd[1]: Reload requested from client PID 2585 ('systemctl') (unit session-7.scope)...
Sep 12 23:48:59.407793 systemd[1]: Reloading...
Sep 12 23:48:59.497169 zram_generator::config[2643]: No configuration found.
Sep 12 23:48:59.565827 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 23:48:59.667877 systemd[1]: Reloading finished in 259 ms.
Sep 12 23:48:59.693005 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:48:59.705493 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 23:48:59.705754 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:48:59.705821 systemd[1]: kubelet.service: Consumed 2.010s CPU time, 127.2M memory peak.
Sep 12 23:48:59.707738 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 23:48:59.869181 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 23:48:59.873582 (kubelet)[2670]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 23:48:59.907939 kubelet[2670]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:48:59.907939 kubelet[2670]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 23:48:59.907939 kubelet[2670]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 23:48:59.908287 kubelet[2670]: I0912 23:48:59.908015 2670 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 23:48:59.914181 kubelet[2670]: I0912 23:48:59.913924 2670 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 12 23:48:59.914181 kubelet[2670]: I0912 23:48:59.913956 2670 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 23:48:59.914625 kubelet[2670]: I0912 23:48:59.914604 2670 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 12 23:48:59.915929 kubelet[2670]: I0912 23:48:59.915905 2670 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 12 23:48:59.918830 kubelet[2670]: I0912 23:48:59.918739 2670 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 23:48:59.926531 kubelet[2670]: I0912 23:48:59.926485 2670 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 23:48:59.934755 kubelet[2670]: I0912 23:48:59.934723 2670 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 23:48:59.934970 kubelet[2670]: I0912 23:48:59.934932 2670 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 23:48:59.935116 kubelet[2670]: I0912 23:48:59.934960 2670 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 23:48:59.935203 kubelet[2670]: I0912 23:48:59.935123 2670 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 23:48:59.935203 kubelet[2670]: I0912 23:48:59.935132 2670 container_manager_linux.go:303] "Creating device plugin manager"
Sep 12 23:48:59.935406 kubelet[2670]: I0912 23:48:59.935203 2670 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 23:48:59.935406 kubelet[2670]: I0912 23:48:59.935401 2670 kubelet.go:480] "Attempting to sync node with API server"
Sep 12 23:48:59.935446 kubelet[2670]: I0912 23:48:59.935417 2670 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 23:48:59.935446 kubelet[2670]: I0912 23:48:59.935443 2670 kubelet.go:386] "Adding apiserver pod source"
Sep 12 23:48:59.935490 kubelet[2670]: I0912 23:48:59.935456 2670 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 23:48:59.938929 kubelet[2670]: I0912 23:48:59.938895 2670 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 12 23:48:59.939597 kubelet[2670]: I0912 23:48:59.939558 2670 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 12 23:48:59.943338 kubelet[2670]: I0912 23:48:59.943261 2670 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 23:48:59.943338 kubelet[2670]: I0912 23:48:59.943309 2670 server.go:1289] "Started kubelet"
Sep 12 23:48:59.944643 kubelet[2670]: I0912 23:48:59.943743 2670 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 23:48:59.946168 kubelet[2670]: I0912 23:48:59.944948 2670 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 23:48:59.946168 kubelet[2670]: I0912 23:48:59.944466 2670 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 23:48:59.946168 kubelet[2670]: I0912 23:48:59.945031 2670 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 23:48:59.948690 kubelet[2670]: I0912 23:48:59.947119 2670 server.go:317] "Adding debug handlers to kubelet server"
Sep 12 23:48:59.956208 kubelet[2670]: I0912 23:48:59.956168 2670 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 23:48:59.957328 kubelet[2670]: I0912 23:48:59.957295 2670 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 23:48:59.957552 kubelet[2670]: E0912 23:48:59.957522 2670 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 12 23:48:59.958392 kubelet[2670]: I0912 23:48:59.958039 2670 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 23:48:59.958392 kubelet[2670]: I0912 23:48:59.958227 2670 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 23:48:59.960681 kubelet[2670]: E0912 23:48:59.960649 2670 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 23:48:59.962992 kubelet[2670]: I0912 23:48:59.962833 2670 factory.go:223] Registration of the systemd container factory successfully
Sep 12 23:48:59.964259 kubelet[2670]: I0912 23:48:59.962945 2670 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 23:48:59.967820 kubelet[2670]: I0912 23:48:59.967676 2670 factory.go:223] Registration of the containerd container factory successfully
Sep 12 23:48:59.970990 kubelet[2670]: I0912 23:48:59.970910 2670 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 12 23:48:59.973547 kubelet[2670]: I0912 23:48:59.973386 2670 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 12 23:48:59.973547 kubelet[2670]: I0912 23:48:59.973426 2670 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 12 23:48:59.973547 kubelet[2670]: I0912 23:48:59.973446 2670 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 12 23:48:59.973547 kubelet[2670]: I0912 23:48:59.973454 2670 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 12 23:48:59.973670 kubelet[2670]: E0912 23:48:59.973585 2670 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 23:48:59.998103 kubelet[2670]: I0912 23:48:59.998046 2670 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 12 23:48:59.998103 kubelet[2670]: I0912 23:48:59.998070 2670 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 12 23:48:59.998103 kubelet[2670]: I0912 23:48:59.998104 2670 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 23:48:59.998267 kubelet[2670]: I0912 23:48:59.998250 2670 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 23:48:59.998287 kubelet[2670]: I0912 23:48:59.998264 2670 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 23:48:59.998287 kubelet[2670]: I0912 23:48:59.998280 2670 policy_none.go:49] "None policy: Start"
Sep 12 23:48:59.998327 kubelet[2670]: I0912 23:48:59.998289 2670 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 12 23:48:59.998327 kubelet[2670]: I0912 23:48:59.998298 2670 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 23:48:59.998433 kubelet[2670]: I0912 23:48:59.998386 2670 state_mem.go:75] "Updated machine memory state"
Sep 12 23:49:00.004820 kubelet[2670]: E0912 23:49:00.004772 2670 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 12 23:49:00.004999 kubelet[2670]: I0912 23:49:00.004930 2670 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 23:49:00.004999 kubelet[2670]: I0912 23:49:00.004954 2670 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 23:49:00.005833 kubelet[2670]: I0912 23:49:00.005796 2670 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 23:49:00.006642 kubelet[2670]: E0912 23:49:00.006607 2670 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 12 23:49:00.074957 kubelet[2670]: I0912 23:49:00.074892 2670 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 23:49:00.076072 kubelet[2670]: I0912 23:49:00.075075 2670 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 12 23:49:00.076171 kubelet[2670]: I0912 23:49:00.076119 2670 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:49:00.106556 kubelet[2670]: I0912 23:49:00.106530 2670 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 12 23:49:00.120632 kubelet[2670]: I0912 23:49:00.120584 2670 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 12 23:49:00.120761 kubelet[2670]: I0912 23:49:00.120676 2670 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 12 23:49:00.159585 kubelet[2670]: I0912 23:49:00.159533 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a64b0c2e056175fe58ed5df668cc81f1-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a64b0c2e056175fe58ed5df668cc81f1\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 23:49:00.159585 kubelet[2670]: I0912 23:49:00.159589 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:49:00.159741 kubelet[2670]: I0912 23:49:00.159616 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:49:00.159741 kubelet[2670]: I0912 23:49:00.159646 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a64b0c2e056175fe58ed5df668cc81f1-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a64b0c2e056175fe58ed5df668cc81f1\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 23:49:00.159741 kubelet[2670]: I0912 23:49:00.159661 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a64b0c2e056175fe58ed5df668cc81f1-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a64b0c2e056175fe58ed5df668cc81f1\") " pod="kube-system/kube-apiserver-localhost"
Sep 12 23:49:00.159741 kubelet[2670]: I0912 23:49:00.159675 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:49:00.159741 kubelet[2670]: I0912 23:49:00.159691 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:49:00.159835 kubelet[2670]: I0912 23:49:00.159724 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost"
Sep 12 23:49:00.159835 kubelet[2670]: I0912 23:49:00.159743 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost"
Sep 12 23:49:00.936536 kubelet[2670]: I0912 23:49:00.936487 2670 apiserver.go:52] "Watching apiserver"
Sep 12 23:49:00.958837 kubelet[2670]: I0912 23:49:00.958780 2670 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 12 23:49:00.989161 kubelet[2670]: I0912 23:49:00.989083 2670 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 12 23:49:01.016654 kubelet[2670]: E0912 23:49:01.016611 2670 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Sep 12 23:49:01.049875 kubelet[2670]: I0912 23:49:01.049723 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.049705567 podStartE2EDuration="1.049705567s" podCreationTimestamp="2025-09-12 23:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:49:01.037983664 +0000 UTC m=+1.161298352" watchObservedRunningTime="2025-09-12 23:49:01.049705567 +0000 UTC m=+1.173020255"
Sep 12 23:49:01.060932 kubelet[2670]: I0912 23:49:01.060868 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.060851509 podStartE2EDuration="1.060851509s" podCreationTimestamp="2025-09-12 23:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:49:01.049911408 +0000 UTC m=+1.173226096" watchObservedRunningTime="2025-09-12 23:49:01.060851509 +0000 UTC m=+1.184166197"
Sep 12 23:49:01.061116 kubelet[2670]: I0912 23:49:01.061003 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.060998389 podStartE2EDuration="1.060998389s" podCreationTimestamp="2025-09-12 23:49:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:49:01.060325148 +0000 UTC m=+1.183639796" watchObservedRunningTime="2025-09-12 23:49:01.060998389 +0000 UTC m=+1.184313077"
Sep 12 23:49:04.996863 kubelet[2670]: I0912 23:49:04.996805 2670 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 23:49:04.997935 containerd[1530]: time="2025-09-12T23:49:04.997879653Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 12 23:49:04.998244 kubelet[2670]: I0912 23:49:04.998065 2670 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 23:49:05.681896 systemd[1]: Created slice kubepods-besteffort-pod3a510c5f_7b20_4c8c_a63f_625bb5a98355.slice - libcontainer container kubepods-besteffort-pod3a510c5f_7b20_4c8c_a63f_625bb5a98355.slice.
Sep 12 23:49:05.695387 kubelet[2670]: I0912 23:49:05.695295 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3a510c5f-7b20-4c8c-a63f-625bb5a98355-xtables-lock\") pod \"kube-proxy-8gbdm\" (UID: \"3a510c5f-7b20-4c8c-a63f-625bb5a98355\") " pod="kube-system/kube-proxy-8gbdm"
Sep 12 23:49:05.695387 kubelet[2670]: I0912 23:49:05.695386 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3a510c5f-7b20-4c8c-a63f-625bb5a98355-lib-modules\") pod \"kube-proxy-8gbdm\" (UID: \"3a510c5f-7b20-4c8c-a63f-625bb5a98355\") " pod="kube-system/kube-proxy-8gbdm"
Sep 12 23:49:05.695526 kubelet[2670]: I0912 23:49:05.695405 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3a510c5f-7b20-4c8c-a63f-625bb5a98355-kube-proxy\") pod \"kube-proxy-8gbdm\" (UID: \"3a510c5f-7b20-4c8c-a63f-625bb5a98355\") " pod="kube-system/kube-proxy-8gbdm"
Sep 12 23:49:05.695526 kubelet[2670]: I0912 23:49:05.695421 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29g89\" (UniqueName: \"kubernetes.io/projected/3a510c5f-7b20-4c8c-a63f-625bb5a98355-kube-api-access-29g89\") pod \"kube-proxy-8gbdm\" (UID: \"3a510c5f-7b20-4c8c-a63f-625bb5a98355\") " pod="kube-system/kube-proxy-8gbdm"
Sep 12 23:49:05.804256 kubelet[2670]: E0912 23:49:05.804219 2670 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 12 23:49:05.804256 kubelet[2670]: E0912 23:49:05.804252 2670 projected.go:194] Error preparing data for projected volume kube-api-access-29g89 for pod kube-system/kube-proxy-8gbdm: configmap "kube-root-ca.crt" not found
Sep 12 23:49:05.804404 kubelet[2670]: E0912 23:49:05.804352 2670 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3a510c5f-7b20-4c8c-a63f-625bb5a98355-kube-api-access-29g89 podName:3a510c5f-7b20-4c8c-a63f-625bb5a98355 nodeName:}" failed. No retries permitted until 2025-09-12 23:49:06.30429655 +0000 UTC m=+6.427611238 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-29g89" (UniqueName: "kubernetes.io/projected/3a510c5f-7b20-4c8c-a63f-625bb5a98355-kube-api-access-29g89") pod "kube-proxy-8gbdm" (UID: "3a510c5f-7b20-4c8c-a63f-625bb5a98355") : configmap "kube-root-ca.crt" not found
Sep 12 23:49:06.246660 systemd[1]: Created slice kubepods-besteffort-pode286074b_857c_4ba7_ac4c_f6f46a440b95.slice - libcontainer container kubepods-besteffort-pode286074b_857c_4ba7_ac4c_f6f46a440b95.slice.
Sep 12 23:49:06.300821 kubelet[2670]: I0912 23:49:06.300763 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e286074b-857c-4ba7-ac4c-f6f46a440b95-var-lib-calico\") pod \"tigera-operator-755d956888-wbpv2\" (UID: \"e286074b-857c-4ba7-ac4c-f6f46a440b95\") " pod="tigera-operator/tigera-operator-755d956888-wbpv2"
Sep 12 23:49:06.300821 kubelet[2670]: I0912 23:49:06.300803 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwnzn\" (UniqueName: \"kubernetes.io/projected/e286074b-857c-4ba7-ac4c-f6f46a440b95-kube-api-access-xwnzn\") pod \"tigera-operator-755d956888-wbpv2\" (UID: \"e286074b-857c-4ba7-ac4c-f6f46a440b95\") " pod="tigera-operator/tigera-operator-755d956888-wbpv2"
Sep 12 23:49:06.550846 containerd[1530]: time="2025-09-12T23:49:06.550734863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-wbpv2,Uid:e286074b-857c-4ba7-ac4c-f6f46a440b95,Namespace:tigera-operator,Attempt:0,}"
Sep 12 23:49:06.594613 containerd[1530]: time="2025-09-12T23:49:06.594559605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8gbdm,Uid:3a510c5f-7b20-4c8c-a63f-625bb5a98355,Namespace:kube-system,Attempt:0,}"
Sep 12 23:49:06.816263 containerd[1530]: time="2025-09-12T23:49:06.815684318Z" level=info msg="connecting to shim 9b31248a142283ab28c4aeb73815f12d4ca75c20b034012c07ed2b44285945f6" address="unix:///run/containerd/s/7134d1f195b25ce3078f4b2cc427e1e04178ea802839aba991fc73a5126a8b04" namespace=k8s.io protocol=ttrpc version=3
Sep 12 23:49:06.846369 systemd[1]: Started cri-containerd-9b31248a142283ab28c4aeb73815f12d4ca75c20b034012c07ed2b44285945f6.scope - libcontainer container 9b31248a142283ab28c4aeb73815f12d4ca75c20b034012c07ed2b44285945f6.
Sep 12 23:49:06.908210 containerd[1530]: time="2025-09-12T23:49:06.908168889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-wbpv2,Uid:e286074b-857c-4ba7-ac4c-f6f46a440b95,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9b31248a142283ab28c4aeb73815f12d4ca75c20b034012c07ed2b44285945f6\""
Sep 12 23:49:06.910103 containerd[1530]: time="2025-09-12T23:49:06.909993811Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 23:49:06.947682 containerd[1530]: time="2025-09-12T23:49:06.947625504Z" level=info msg="connecting to shim 4234b0ba5a68eb3fd16241640b4d38f216433de27bba3c93c5db351f3f889f8d" address="unix:///run/containerd/s/4d808deac42a1d83698a8f03075766b9e6f17318d868e77f8a95af76d881ce5f" namespace=k8s.io protocol=ttrpc version=3
Sep 12 23:49:06.981409 systemd[1]: Started cri-containerd-4234b0ba5a68eb3fd16241640b4d38f216433de27bba3c93c5db351f3f889f8d.scope - libcontainer container 4234b0ba5a68eb3fd16241640b4d38f216433de27bba3c93c5db351f3f889f8d.
Sep 12 23:49:07.017962 containerd[1530]: time="2025-09-12T23:49:07.017872282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8gbdm,Uid:3a510c5f-7b20-4c8c-a63f-625bb5a98355,Namespace:kube-system,Attempt:0,} returns sandbox id \"4234b0ba5a68eb3fd16241640b4d38f216433de27bba3c93c5db351f3f889f8d\""
Sep 12 23:49:07.050121 containerd[1530]: time="2025-09-12T23:49:07.050078405Z" level=info msg="CreateContainer within sandbox \"4234b0ba5a68eb3fd16241640b4d38f216433de27bba3c93c5db351f3f889f8d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 23:49:07.071641 containerd[1530]: time="2025-09-12T23:49:07.071543753Z" level=info msg="Container 470c2b7093bde84ecbae158f5b84a53c10b500b71bd8074876f064d1f059c077: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:49:07.082176 containerd[1530]: time="2025-09-12T23:49:07.081974887Z" level=info msg="CreateContainer within sandbox \"4234b0ba5a68eb3fd16241640b4d38f216433de27bba3c93c5db351f3f889f8d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"470c2b7093bde84ecbae158f5b84a53c10b500b71bd8074876f064d1f059c077\""
Sep 12 23:49:07.084107 containerd[1530]: time="2025-09-12T23:49:07.082782288Z" level=info msg="StartContainer for \"470c2b7093bde84ecbae158f5b84a53c10b500b71bd8074876f064d1f059c077\""
Sep 12 23:49:07.084698 containerd[1530]: time="2025-09-12T23:49:07.084503130Z" level=info msg="connecting to shim 470c2b7093bde84ecbae158f5b84a53c10b500b71bd8074876f064d1f059c077" address="unix:///run/containerd/s/4d808deac42a1d83698a8f03075766b9e6f17318d868e77f8a95af76d881ce5f" protocol=ttrpc version=3
Sep 12 23:49:07.117751 systemd[1]: Started cri-containerd-470c2b7093bde84ecbae158f5b84a53c10b500b71bd8074876f064d1f059c077.scope - libcontainer container 470c2b7093bde84ecbae158f5b84a53c10b500b71bd8074876f064d1f059c077.
Sep 12 23:49:07.156314 containerd[1530]: time="2025-09-12T23:49:07.156232426Z" level=info msg="StartContainer for \"470c2b7093bde84ecbae158f5b84a53c10b500b71bd8074876f064d1f059c077\" returns successfully"
Sep 12 23:49:09.923067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1144543470.mount: Deactivated successfully.
Sep 12 23:49:10.052032 kubelet[2670]: I0912 23:49:10.049100 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8gbdm" podStartSLOduration=5.049081365 podStartE2EDuration="5.049081365s" podCreationTimestamp="2025-09-12 23:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:49:08.022624332 +0000 UTC m=+8.145938980" watchObservedRunningTime="2025-09-12 23:49:10.049081365 +0000 UTC m=+10.172396053"
Sep 12 23:49:10.353228 containerd[1530]: time="2025-09-12T23:49:10.353181977Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:10.357569 containerd[1530]: time="2025-09-12T23:49:10.357505182Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365"
Sep 12 23:49:10.363283 containerd[1530]: time="2025-09-12T23:49:10.363236588Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:10.372949 containerd[1530]: time="2025-09-12T23:49:10.372893118Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 3.462861267s"
Sep 12 23:49:10.373047 containerd[1530]: time="2025-09-12T23:49:10.372940999Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\""
Sep 12 23:49:10.379440 containerd[1530]: time="2025-09-12T23:49:10.379398806Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:10.391357 containerd[1530]: time="2025-09-12T23:49:10.391264299Z" level=info msg="CreateContainer within sandbox \"9b31248a142283ab28c4aeb73815f12d4ca75c20b034012c07ed2b44285945f6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 23:49:10.407448 containerd[1530]: time="2025-09-12T23:49:10.407387876Z" level=info msg="Container 2c5ec0cfe1bb608ee21e7e652ee5783f5e342c4b1458beb9414b8f85b7f22677: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:49:10.429341 containerd[1530]: time="2025-09-12T23:49:10.429285420Z" level=info msg="CreateContainer within sandbox \"9b31248a142283ab28c4aeb73815f12d4ca75c20b034012c07ed2b44285945f6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"2c5ec0cfe1bb608ee21e7e652ee5783f5e342c4b1458beb9414b8f85b7f22677\""
Sep 12 23:49:10.429870 containerd[1530]: time="2025-09-12T23:49:10.429844981Z" level=info msg="StartContainer for \"2c5ec0cfe1bb608ee21e7e652ee5783f5e342c4b1458beb9414b8f85b7f22677\""
Sep 12 23:49:10.431131 containerd[1530]: time="2025-09-12T23:49:10.430866222Z" level=info msg="connecting to shim 2c5ec0cfe1bb608ee21e7e652ee5783f5e342c4b1458beb9414b8f85b7f22677" address="unix:///run/containerd/s/7134d1f195b25ce3078f4b2cc427e1e04178ea802839aba991fc73a5126a8b04" protocol=ttrpc version=3
Sep 12 23:49:10.481350 systemd[1]: Started cri-containerd-2c5ec0cfe1bb608ee21e7e652ee5783f5e342c4b1458beb9414b8f85b7f22677.scope - libcontainer container 2c5ec0cfe1bb608ee21e7e652ee5783f5e342c4b1458beb9414b8f85b7f22677.
Sep 12 23:49:10.533048 containerd[1530]: time="2025-09-12T23:49:10.533008813Z" level=info msg="StartContainer for \"2c5ec0cfe1bb608ee21e7e652ee5783f5e342c4b1458beb9414b8f85b7f22677\" returns successfully"
Sep 12 23:49:11.026187 kubelet[2670]: I0912 23:49:11.026015 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-wbpv2" podStartSLOduration=1.556898115 podStartE2EDuration="5.02599783s" podCreationTimestamp="2025-09-12 23:49:06 +0000 UTC" firstStartedPulling="2025-09-12 23:49:06.90945341 +0000 UTC m=+7.032768098" lastFinishedPulling="2025-09-12 23:49:10.378553165 +0000 UTC m=+10.501867813" observedRunningTime="2025-09-12 23:49:11.025467069 +0000 UTC m=+11.148781717" watchObservedRunningTime="2025-09-12 23:49:11.02599783 +0000 UTC m=+11.149312518"
Sep 12 23:49:13.127276 update_engine[1514]: I20250912 23:49:13.127182 1514 update_attempter.cc:509] Updating boot flags...
Sep 12 23:49:16.453078 sudo[1738]: pam_unix(sudo:session): session closed for user root
Sep 12 23:49:16.456343 sshd[1737]: Connection closed by 10.0.0.1 port 50644
Sep 12 23:49:16.457258 sshd-session[1735]: pam_unix(sshd:session): session closed for user core
Sep 12 23:49:16.462422 systemd-logind[1508]: Session 7 logged out. Waiting for processes to exit.
Sep 12 23:49:16.462676 systemd[1]: sshd@6-10.0.0.101:22-10.0.0.1:50644.service: Deactivated successfully.
Sep 12 23:49:16.473208 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 23:49:16.475070 systemd[1]: session-7.scope: Consumed 7.446s CPU time, 218M memory peak.
Sep 12 23:49:16.484216 systemd-logind[1508]: Removed session 7.
Sep 12 23:49:20.293003 systemd[1]: Created slice kubepods-besteffort-pod549aa1c4_ea60_4195_b72a_7de39b8acdd2.slice - libcontainer container kubepods-besteffort-pod549aa1c4_ea60_4195_b72a_7de39b8acdd2.slice.
Sep 12 23:49:20.304153 kubelet[2670]: I0912 23:49:20.304099 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/549aa1c4-ea60-4195-b72a-7de39b8acdd2-typha-certs\") pod \"calico-typha-5c99ccfb99-l68gk\" (UID: \"549aa1c4-ea60-4195-b72a-7de39b8acdd2\") " pod="calico-system/calico-typha-5c99ccfb99-l68gk"
Sep 12 23:49:20.304153 kubelet[2670]: I0912 23:49:20.304152 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5slls\" (UniqueName: \"kubernetes.io/projected/549aa1c4-ea60-4195-b72a-7de39b8acdd2-kube-api-access-5slls\") pod \"calico-typha-5c99ccfb99-l68gk\" (UID: \"549aa1c4-ea60-4195-b72a-7de39b8acdd2\") " pod="calico-system/calico-typha-5c99ccfb99-l68gk"
Sep 12 23:49:20.304789 kubelet[2670]: I0912 23:49:20.304213 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/549aa1c4-ea60-4195-b72a-7de39b8acdd2-tigera-ca-bundle\") pod \"calico-typha-5c99ccfb99-l68gk\" (UID: \"549aa1c4-ea60-4195-b72a-7de39b8acdd2\") " pod="calico-system/calico-typha-5c99ccfb99-l68gk"
Sep 12 23:49:20.596536 containerd[1530]: time="2025-09-12T23:49:20.596426402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c99ccfb99-l68gk,Uid:549aa1c4-ea60-4195-b72a-7de39b8acdd2,Namespace:calico-system,Attempt:0,}"
Sep 12 23:49:20.631788 containerd[1530]: time="2025-09-12T23:49:20.631654182Z" level=info msg="connecting to shim 225e782b46c43d3567f8ef6e8143e74fa5dd5644f11d6637e2d1eea57b40ee49" address="unix:///run/containerd/s/31c860b8d31e2e20fcdf3fab595ea32000c44e3361fa98152d933194b57840b0" namespace=k8s.io protocol=ttrpc version=3
Sep 12 23:49:20.674368 systemd[1]: Started cri-containerd-225e782b46c43d3567f8ef6e8143e74fa5dd5644f11d6637e2d1eea57b40ee49.scope - libcontainer container 225e782b46c43d3567f8ef6e8143e74fa5dd5644f11d6637e2d1eea57b40ee49.
Sep 12 23:49:20.686906 systemd[1]: Created slice kubepods-besteffort-pod79542ded_be1e_4311_94d6_c6d8541e8a56.slice - libcontainer container kubepods-besteffort-pod79542ded_be1e_4311_94d6_c6d8541e8a56.slice.
Sep 12 23:49:20.711510 kubelet[2670]: I0912 23:49:20.711466 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/79542ded-be1e-4311-94d6-c6d8541e8a56-cni-log-dir\") pod \"calico-node-xmx7s\" (UID: \"79542ded-be1e-4311-94d6-c6d8541e8a56\") " pod="calico-system/calico-node-xmx7s"
Sep 12 23:49:20.711510 kubelet[2670]: I0912 23:49:20.711511 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/79542ded-be1e-4311-94d6-c6d8541e8a56-var-run-calico\") pod \"calico-node-xmx7s\" (UID: \"79542ded-be1e-4311-94d6-c6d8541e8a56\") " pod="calico-system/calico-node-xmx7s"
Sep 12 23:49:20.711651 kubelet[2670]: I0912 23:49:20.711528 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/79542ded-be1e-4311-94d6-c6d8541e8a56-node-certs\") pod \"calico-node-xmx7s\" (UID: \"79542ded-be1e-4311-94d6-c6d8541e8a56\") " pod="calico-system/calico-node-xmx7s"
Sep 12 23:49:20.711651 kubelet[2670]: I0912 23:49:20.711546 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/79542ded-be1e-4311-94d6-c6d8541e8a56-lib-modules\") pod \"calico-node-xmx7s\" (UID: \"79542ded-be1e-4311-94d6-c6d8541e8a56\") " pod="calico-system/calico-node-xmx7s"
Sep 12 23:49:20.711651 kubelet[2670]: I0912 23:49:20.711565 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/79542ded-be1e-4311-94d6-c6d8541e8a56-cni-net-dir\") pod \"calico-node-xmx7s\" (UID: \"79542ded-be1e-4311-94d6-c6d8541e8a56\") " pod="calico-system/calico-node-xmx7s"
Sep 12 23:49:20.711712 kubelet[2670]: I0912 23:49:20.711644 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/79542ded-be1e-4311-94d6-c6d8541e8a56-flexvol-driver-host\") pod \"calico-node-xmx7s\" (UID: \"79542ded-be1e-4311-94d6-c6d8541e8a56\") " pod="calico-system/calico-node-xmx7s"
Sep 12 23:49:20.711712 kubelet[2670]: I0912 23:49:20.711673 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/79542ded-be1e-4311-94d6-c6d8541e8a56-policysync\") pod \"calico-node-xmx7s\" (UID: \"79542ded-be1e-4311-94d6-c6d8541e8a56\") " pod="calico-system/calico-node-xmx7s"
Sep 12 23:49:20.711712 kubelet[2670]: I0912 23:49:20.711705 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79542ded-be1e-4311-94d6-c6d8541e8a56-tigera-ca-bundle\") pod \"calico-node-xmx7s\" (UID: \"79542ded-be1e-4311-94d6-c6d8541e8a56\") " pod="calico-system/calico-node-xmx7s"
Sep 12 23:49:20.711782 kubelet[2670]: I0912 23:49:20.711719 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/79542ded-be1e-4311-94d6-c6d8541e8a56-var-lib-calico\") pod \"calico-node-xmx7s\" (UID: \"79542ded-be1e-4311-94d6-c6d8541e8a56\") " pod="calico-system/calico-node-xmx7s"
Sep 12 23:49:20.711782 kubelet[2670]: I0912 23:49:20.711735 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/79542ded-be1e-4311-94d6-c6d8541e8a56-cni-bin-dir\") pod \"calico-node-xmx7s\" (UID: \"79542ded-be1e-4311-94d6-c6d8541e8a56\") " pod="calico-system/calico-node-xmx7s"
Sep 12 23:49:20.711782 kubelet[2670]: I0912 23:49:20.711751 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/79542ded-be1e-4311-94d6-c6d8541e8a56-xtables-lock\") pod \"calico-node-xmx7s\" (UID: \"79542ded-be1e-4311-94d6-c6d8541e8a56\") " pod="calico-system/calico-node-xmx7s"
Sep 12 23:49:20.711782 kubelet[2670]: I0912 23:49:20.711777 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp5kv\" (UniqueName: \"kubernetes.io/projected/79542ded-be1e-4311-94d6-c6d8541e8a56-kube-api-access-qp5kv\") pod \"calico-node-xmx7s\" (UID: \"79542ded-be1e-4311-94d6-c6d8541e8a56\") " pod="calico-system/calico-node-xmx7s"
Sep 12 23:49:20.735635 containerd[1530]: time="2025-09-12T23:49:20.735592002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c99ccfb99-l68gk,Uid:549aa1c4-ea60-4195-b72a-7de39b8acdd2,Namespace:calico-system,Attempt:0,} returns sandbox id \"225e782b46c43d3567f8ef6e8143e74fa5dd5644f11d6637e2d1eea57b40ee49\""
Sep 12 23:49:20.744547 containerd[1530]: time="2025-09-12T23:49:20.744511447Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 23:49:20.829107 kubelet[2670]: E0912 23:49:20.829075 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:20.829107 kubelet[2670]: W0912 23:49:20.829100 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:20.832017 kubelet[2670]: E0912 23:49:20.831985 2670 plugins.go:703] "Error
dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:20.958215 kubelet[2670]: E0912 23:49:20.958047 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6grrt" podUID="4b1d8eb9-d498-4006-84a4-c9d3a374aa3e" Sep 12 23:49:20.987387 kubelet[2670]: E0912 23:49:20.987357 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:20.987387 kubelet[2670]: W0912 23:49:20.987379 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:20.987387 kubelet[2670]: E0912 23:49:20.987397 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:20.987575 kubelet[2670]: E0912 23:49:20.987561 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:20.987617 kubelet[2670]: W0912 23:49:20.987572 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:20.987648 kubelet[2670]: E0912 23:49:20.987619 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:20.991678 containerd[1530]: time="2025-09-12T23:49:20.991643628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xmx7s,Uid:79542ded-be1e-4311-94d6-c6d8541e8a56,Namespace:calico-system,Attempt:0,}" Sep 12 23:49:21.013584 kubelet[2670]: E0912 23:49:21.013547 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:21.013584 kubelet[2670]: W0912 23:49:21.013569 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:21.013584 kubelet[2670]: E0912 23:49:21.013586 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:21.013737 kubelet[2670]: I0912 23:49:21.013614 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4b1d8eb9-d498-4006-84a4-c9d3a374aa3e-kubelet-dir\") pod \"csi-node-driver-6grrt\" (UID: \"4b1d8eb9-d498-4006-84a4-c9d3a374aa3e\") " pod="calico-system/csi-node-driver-6grrt" Sep 12 23:49:21.014233 kubelet[2670]: E0912 23:49:21.013788 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:21.014233 kubelet[2670]: W0912 23:49:21.013802 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:21.014233 kubelet[2670]: E0912 23:49:21.013811 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:21.014233 kubelet[2670]: I0912 23:49:21.013832 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4b1d8eb9-d498-4006-84a4-c9d3a374aa3e-registration-dir\") pod \"csi-node-driver-6grrt\" (UID: \"4b1d8eb9-d498-4006-84a4-c9d3a374aa3e\") " pod="calico-system/csi-node-driver-6grrt" Sep 12 23:49:21.014233 kubelet[2670]: E0912 23:49:21.014012 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:21.014233 kubelet[2670]: W0912 23:49:21.014021 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:21.014233 kubelet[2670]: E0912 23:49:21.014030 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:21.014233 kubelet[2670]: I0912 23:49:21.014049 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4b1d8eb9-d498-4006-84a4-c9d3a374aa3e-varrun\") pod \"csi-node-driver-6grrt\" (UID: \"4b1d8eb9-d498-4006-84a4-c9d3a374aa3e\") " pod="calico-system/csi-node-driver-6grrt" Sep 12 23:49:21.014233 kubelet[2670]: E0912 23:49:21.014229 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:21.014455 kubelet[2670]: W0912 23:49:21.014246 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:21.014455 kubelet[2670]: E0912 23:49:21.014259 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:21.014455 kubelet[2670]: E0912 23:49:21.014432 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:21.014455 kubelet[2670]: W0912 23:49:21.014440 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:21.014455 kubelet[2670]: E0912 23:49:21.014448 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:21.014947 containerd[1530]: time="2025-09-12T23:49:21.014912601Z" level=info msg="connecting to shim 2489271798a602295c01024dd58d1d17eb9b36ce4261ebb2a9e9d89e608f1d8c" address="unix:///run/containerd/s/eee8861131537a42bbfe385255acb2d4c369cd18b90eab58814e1d6f39d5ed61" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:21.014992 kubelet[2670]: E0912 23:49:21.014964 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:21.014992 kubelet[2670]: W0912 23:49:21.014973 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:21.014992 kubelet[2670]: E0912 23:49:21.014980 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:21.015048 kubelet[2670]: I0912 23:49:21.015001 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4b1d8eb9-d498-4006-84a4-c9d3a374aa3e-socket-dir\") pod \"csi-node-driver-6grrt\" (UID: \"4b1d8eb9-d498-4006-84a4-c9d3a374aa3e\") " pod="calico-system/csi-node-driver-6grrt" Sep 12 23:49:21.015225 kubelet[2670]: E0912 23:49:21.015200 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:21.015225 kubelet[2670]: W0912 23:49:21.015216 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:21.015225 kubelet[2670]: E0912 23:49:21.015227 2670 plugins.go:703] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:21.015431 kubelet[2670]: E0912 23:49:21.015412 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:21.015431 kubelet[2670]: W0912 23:49:21.015425 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:21.015493 kubelet[2670]: E0912 23:49:21.015435 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:21.015603 kubelet[2670]: E0912 23:49:21.015587 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:21.015603 kubelet[2670]: W0912 23:49:21.015599 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:21.015672 kubelet[2670]: E0912 23:49:21.015608 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:21.015672 kubelet[2670]: I0912 23:49:21.015635 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzvqb\" (UniqueName: \"kubernetes.io/projected/4b1d8eb9-d498-4006-84a4-c9d3a374aa3e-kube-api-access-pzvqb\") pod \"csi-node-driver-6grrt\" (UID: \"4b1d8eb9-d498-4006-84a4-c9d3a374aa3e\") " pod="calico-system/csi-node-driver-6grrt" Sep 12 23:49:21.016068 kubelet[2670]: E0912 23:49:21.015793 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:21.016068 kubelet[2670]: W0912 23:49:21.015807 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:21.016068 kubelet[2670]: E0912 23:49:21.015817 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:21.016068 kubelet[2670]: E0912 23:49:21.015940 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:21.016068 kubelet[2670]: W0912 23:49:21.015947 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:21.016068 kubelet[2670]: E0912 23:49:21.015954 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:21.016328 kubelet[2670]: E0912 23:49:21.016229 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:21.016328 kubelet[2670]: W0912 23:49:21.016239 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:21.016328 kubelet[2670]: E0912 23:49:21.016250 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:21.016417 kubelet[2670]: E0912 23:49:21.016400 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:21.016417 kubelet[2670]: W0912 23:49:21.016411 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:21.016462 kubelet[2670]: E0912 23:49:21.016419 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:21.041312 systemd[1]: Started cri-containerd-2489271798a602295c01024dd58d1d17eb9b36ce4261ebb2a9e9d89e608f1d8c.scope - libcontainer container 2489271798a602295c01024dd58d1d17eb9b36ce4261ebb2a9e9d89e608f1d8c. 
Sep 12 23:49:21.064090 containerd[1530]: time="2025-09-12T23:49:21.063967068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xmx7s,Uid:79542ded-be1e-4311-94d6-c6d8541e8a56,Namespace:calico-system,Attempt:0,} returns sandbox id \"2489271798a602295c01024dd58d1d17eb9b36ce4261ebb2a9e9d89e608f1d8c\""
Sep 12 23:49:21.682662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1903872546.mount: Deactivated successfully.
Sep 12 23:49:22.980352 kubelet[2670]: E0912 23:49:22.980092 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6grrt" podUID="4b1d8eb9-d498-4006-84a4-c9d3a374aa3e"
Sep 12 23:49:23.246503 containerd[1530]: time="2025-09-12T23:49:23.246450430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:23.247212 containerd[1530]: time="2025-09-12T23:49:23.247189830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775"
Sep 12 23:49:23.250008 containerd[1530]: time="2025-09-12T23:49:23.249667231Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:23.252148 containerd[1530]: time="2025-09-12T23:49:23.252085712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:23.252939 containerd[1530]: time="2025-09-12T23:49:23.252899073Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 2.508211626s"
Sep 12 23:49:23.252939 containerd[1530]: time="2025-09-12T23:49:23.252934473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\""
Sep 12 23:49:23.254268 containerd[1530]: time="2025-09-12T23:49:23.254239994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 12 23:49:23.264430 containerd[1530]: time="2025-09-12T23:49:23.264128798Z" level=info msg="CreateContainer within sandbox \"225e782b46c43d3567f8ef6e8143e74fa5dd5644f11d6637e2d1eea57b40ee49\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 12 23:49:23.272758 containerd[1530]: time="2025-09-12T23:49:23.271911962Z" level=info msg="Container fdd73fb316d71113f686893c165bfd1c988d760e29d2ed65cc15f2f2efa6a6b4: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:49:23.281430 containerd[1530]: time="2025-09-12T23:49:23.281384006Z" level=info msg="CreateContainer within sandbox \"225e782b46c43d3567f8ef6e8143e74fa5dd5644f11d6637e2d1eea57b40ee49\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"fdd73fb316d71113f686893c165bfd1c988d760e29d2ed65cc15f2f2efa6a6b4\""
Sep 12 23:49:23.281913 containerd[1530]: time="2025-09-12T23:49:23.281888887Z" level=info msg="StartContainer for \"fdd73fb316d71113f686893c165bfd1c988d760e29d2ed65cc15f2f2efa6a6b4\""
Sep 12 23:49:23.282985 containerd[1530]: time="2025-09-12T23:49:23.282962167Z" level=info msg="connecting to shim fdd73fb316d71113f686893c165bfd1c988d760e29d2ed65cc15f2f2efa6a6b4" address="unix:///run/containerd/s/31c860b8d31e2e20fcdf3fab595ea32000c44e3361fa98152d933194b57840b0" protocol=ttrpc version=3
Sep 12 23:49:23.305388 systemd[1]: Started cri-containerd-fdd73fb316d71113f686893c165bfd1c988d760e29d2ed65cc15f2f2efa6a6b4.scope - libcontainer container fdd73fb316d71113f686893c165bfd1c988d760e29d2ed65cc15f2f2efa6a6b4.
Sep 12 23:49:23.360607 containerd[1530]: time="2025-09-12T23:49:23.360568604Z" level=info msg="StartContainer for \"fdd73fb316d71113f686893c165bfd1c988d760e29d2ed65cc15f2f2efa6a6b4\" returns successfully"
Sep 12 23:49:24.111504 kubelet[2670]: E0912 23:49:24.111459 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 23:49:24.111504 kubelet[2670]: W0912 23:49:24.111487 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 23:49:24.111504 kubelet[2670]: E0912 23:49:24.111507 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Error: unexpected end of JSON input" Sep 12 23:49:24.142940 kubelet[2670]: E0912 23:49:24.142929 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:24.143007 kubelet[2670]: W0912 23:49:24.142996 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:24.143061 kubelet[2670]: E0912 23:49:24.143052 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:24.143321 kubelet[2670]: E0912 23:49:24.143309 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:24.143623 kubelet[2670]: W0912 23:49:24.143393 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:24.143623 kubelet[2670]: E0912 23:49:24.143409 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:24.143758 kubelet[2670]: E0912 23:49:24.143735 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:24.143799 kubelet[2670]: W0912 23:49:24.143758 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:24.143799 kubelet[2670]: E0912 23:49:24.143770 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:24.144150 kubelet[2670]: E0912 23:49:24.144125 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:24.144150 kubelet[2670]: W0912 23:49:24.144148 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:24.144266 kubelet[2670]: E0912 23:49:24.144158 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:24.144473 kubelet[2670]: E0912 23:49:24.144459 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:24.144509 kubelet[2670]: W0912 23:49:24.144470 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:24.144509 kubelet[2670]: E0912 23:49:24.144484 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:24.144881 kubelet[2670]: E0912 23:49:24.144867 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:24.144881 kubelet[2670]: W0912 23:49:24.144880 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:24.144943 kubelet[2670]: E0912 23:49:24.144890 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:24.145188 kubelet[2670]: E0912 23:49:24.145174 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:24.145188 kubelet[2670]: W0912 23:49:24.145186 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:24.145248 kubelet[2670]: E0912 23:49:24.145197 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:24.145517 kubelet[2670]: E0912 23:49:24.145444 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:24.145517 kubelet[2670]: W0912 23:49:24.145457 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:24.145517 kubelet[2670]: E0912 23:49:24.145468 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:24.145893 kubelet[2670]: E0912 23:49:24.145757 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:24.145893 kubelet[2670]: W0912 23:49:24.145769 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:24.145893 kubelet[2670]: E0912 23:49:24.145778 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:24.146039 kubelet[2670]: E0912 23:49:24.146022 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:24.146039 kubelet[2670]: W0912 23:49:24.146038 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:24.146105 kubelet[2670]: E0912 23:49:24.146049 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 23:49:24.146331 kubelet[2670]: E0912 23:49:24.146318 2670 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 23:49:24.146429 kubelet[2670]: W0912 23:49:24.146389 2670 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 23:49:24.146429 kubelet[2670]: E0912 23:49:24.146404 2670 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 23:49:24.285439 containerd[1530]: time="2025-09-12T23:49:24.285381352Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:24.286981 containerd[1530]: time="2025-09-12T23:49:24.286590272Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 12 23:49:24.288659 containerd[1530]: time="2025-09-12T23:49:24.288629553Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:24.292158 containerd[1530]: time="2025-09-12T23:49:24.292049315Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.037777521s" Sep 12 23:49:24.292425 containerd[1530]: time="2025-09-12T23:49:24.292401995Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 12 23:49:24.293266 containerd[1530]: time="2025-09-12T23:49:24.293244355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:24.297351 containerd[1530]: time="2025-09-12T23:49:24.297318237Z" level=info msg="CreateContainer within sandbox \"2489271798a602295c01024dd58d1d17eb9b36ce4261ebb2a9e9d89e608f1d8c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 23:49:24.308170 containerd[1530]: time="2025-09-12T23:49:24.307077881Z" level=info msg="Container 1d65e3a299727edb9b06b3c32d5e41365ef42e4abdcdc61f2b122f643b85ebdf: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:24.316874 containerd[1530]: time="2025-09-12T23:49:24.316829046Z" level=info msg="CreateContainer within sandbox \"2489271798a602295c01024dd58d1d17eb9b36ce4261ebb2a9e9d89e608f1d8c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1d65e3a299727edb9b06b3c32d5e41365ef42e4abdcdc61f2b122f643b85ebdf\"" Sep 12 23:49:24.317569 containerd[1530]: time="2025-09-12T23:49:24.317526566Z" level=info msg="StartContainer for \"1d65e3a299727edb9b06b3c32d5e41365ef42e4abdcdc61f2b122f643b85ebdf\"" Sep 12 23:49:24.319315 containerd[1530]: time="2025-09-12T23:49:24.319281287Z" level=info msg="connecting to shim 1d65e3a299727edb9b06b3c32d5e41365ef42e4abdcdc61f2b122f643b85ebdf" address="unix:///run/containerd/s/eee8861131537a42bbfe385255acb2d4c369cd18b90eab58814e1d6f39d5ed61" protocol=ttrpc version=3 Sep 12 23:49:24.354365 systemd[1]: Started cri-containerd-1d65e3a299727edb9b06b3c32d5e41365ef42e4abdcdc61f2b122f643b85ebdf.scope - libcontainer container 1d65e3a299727edb9b06b3c32d5e41365ef42e4abdcdc61f2b122f643b85ebdf. 
Sep 12 23:49:24.395900 containerd[1530]: time="2025-09-12T23:49:24.395781641Z" level=info msg="StartContainer for \"1d65e3a299727edb9b06b3c32d5e41365ef42e4abdcdc61f2b122f643b85ebdf\" returns successfully"
Sep 12 23:49:24.414360 systemd[1]: cri-containerd-1d65e3a299727edb9b06b3c32d5e41365ef42e4abdcdc61f2b122f643b85ebdf.scope: Deactivated successfully.
Sep 12 23:49:24.414651 systemd[1]: cri-containerd-1d65e3a299727edb9b06b3c32d5e41365ef42e4abdcdc61f2b122f643b85ebdf.scope: Consumed 30ms CPU time, 6.2M memory peak, 4.5M written to disk.
Sep 12 23:49:24.493204 containerd[1530]: time="2025-09-12T23:49:24.493119564Z" level=info msg="received exit event container_id:\"1d65e3a299727edb9b06b3c32d5e41365ef42e4abdcdc61f2b122f643b85ebdf\" id:\"1d65e3a299727edb9b06b3c32d5e41365ef42e4abdcdc61f2b122f643b85ebdf\" pid:3372 exited_at:{seconds:1757720964 nanos:425278934}"
Sep 12 23:49:24.504288 containerd[1530]: time="2025-09-12T23:49:24.504234369Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d65e3a299727edb9b06b3c32d5e41365ef42e4abdcdc61f2b122f643b85ebdf\" id:\"1d65e3a299727edb9b06b3c32d5e41365ef42e4abdcdc61f2b122f643b85ebdf\" pid:3372 exited_at:{seconds:1757720964 nanos:425278934}"
Sep 12 23:49:24.527315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d65e3a299727edb9b06b3c32d5e41365ef42e4abdcdc61f2b122f643b85ebdf-rootfs.mount: Deactivated successfully.
Sep 12 23:49:24.974276 kubelet[2670]: E0912 23:49:24.974113 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6grrt" podUID="4b1d8eb9-d498-4006-84a4-c9d3a374aa3e"
Sep 12 23:49:25.051685 kubelet[2670]: I0912 23:49:25.051235 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 23:49:25.053241 containerd[1530]: time="2025-09-12T23:49:25.053199370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 12 23:49:25.072965 kubelet[2670]: I0912 23:49:25.072893 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5c99ccfb99-l68gk" podStartSLOduration=2.557864348 podStartE2EDuration="5.072875538s" podCreationTimestamp="2025-09-12 23:49:20 +0000 UTC" firstStartedPulling="2025-09-12 23:49:20.738595403 +0000 UTC m=+20.861910051" lastFinishedPulling="2025-09-12 23:49:23.253606553 +0000 UTC m=+23.376921241" observedRunningTime="2025-09-12 23:49:24.065490854 +0000 UTC m=+24.188805582" watchObservedRunningTime="2025-09-12 23:49:25.072875538 +0000 UTC m=+25.196190226"
Sep 12 23:49:26.910358 containerd[1530]: time="2025-09-12T23:49:26.910315637Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:26.910859 containerd[1530]: time="2025-09-12T23:49:26.910831317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477"
Sep 12 23:49:26.912225 containerd[1530]: time="2025-09-12T23:49:26.912198877Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:26.914182 containerd[1530]: time="2025-09-12T23:49:26.914036678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:26.915313 containerd[1530]: time="2025-09-12T23:49:26.914848438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 1.861605988s"
Sep 12 23:49:26.915313 containerd[1530]: time="2025-09-12T23:49:26.914884678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\""
Sep 12 23:49:26.919746 containerd[1530]: time="2025-09-12T23:49:26.919712760Z" level=info msg="CreateContainer within sandbox \"2489271798a602295c01024dd58d1d17eb9b36ce4261ebb2a9e9d89e608f1d8c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 12 23:49:26.934180 containerd[1530]: time="2025-09-12T23:49:26.933673406Z" level=info msg="Container 0dd43e26e4b1b2242decdfdd3a8d65d5ff7feb4b0232d09eaa332ca17d3a46b9: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:49:26.945687 containerd[1530]: time="2025-09-12T23:49:26.945631690Z" level=info msg="CreateContainer within sandbox \"2489271798a602295c01024dd58d1d17eb9b36ce4261ebb2a9e9d89e608f1d8c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0dd43e26e4b1b2242decdfdd3a8d65d5ff7feb4b0232d09eaa332ca17d3a46b9\""
Sep 12 23:49:26.946657 containerd[1530]: time="2025-09-12T23:49:26.946630571Z" level=info msg="StartContainer for \"0dd43e26e4b1b2242decdfdd3a8d65d5ff7feb4b0232d09eaa332ca17d3a46b9\""
Sep 12 23:49:26.948254 containerd[1530]: time="2025-09-12T23:49:26.948228011Z" level=info msg="connecting to shim 0dd43e26e4b1b2242decdfdd3a8d65d5ff7feb4b0232d09eaa332ca17d3a46b9" address="unix:///run/containerd/s/eee8861131537a42bbfe385255acb2d4c369cd18b90eab58814e1d6f39d5ed61" protocol=ttrpc version=3
Sep 12 23:49:26.967403 systemd[1]: Started cri-containerd-0dd43e26e4b1b2242decdfdd3a8d65d5ff7feb4b0232d09eaa332ca17d3a46b9.scope - libcontainer container 0dd43e26e4b1b2242decdfdd3a8d65d5ff7feb4b0232d09eaa332ca17d3a46b9.
Sep 12 23:49:26.974714 kubelet[2670]: E0912 23:49:26.974376 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6grrt" podUID="4b1d8eb9-d498-4006-84a4-c9d3a374aa3e"
Sep 12 23:49:27.040354 containerd[1530]: time="2025-09-12T23:49:27.040220766Z" level=info msg="StartContainer for \"0dd43e26e4b1b2242decdfdd3a8d65d5ff7feb4b0232d09eaa332ca17d3a46b9\" returns successfully"
Sep 12 23:49:27.497667 systemd[1]: cri-containerd-0dd43e26e4b1b2242decdfdd3a8d65d5ff7feb4b0232d09eaa332ca17d3a46b9.scope: Deactivated successfully.
Sep 12 23:49:27.499209 systemd[1]: cri-containerd-0dd43e26e4b1b2242decdfdd3a8d65d5ff7feb4b0232d09eaa332ca17d3a46b9.scope: Consumed 453ms CPU time, 172.4M memory peak, 3.3M read from disk, 165.8M written to disk.
Sep 12 23:49:27.501250 containerd[1530]: time="2025-09-12T23:49:27.501208894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0dd43e26e4b1b2242decdfdd3a8d65d5ff7feb4b0232d09eaa332ca17d3a46b9\" id:\"0dd43e26e4b1b2242decdfdd3a8d65d5ff7feb4b0232d09eaa332ca17d3a46b9\" pid:3432 exited_at:{seconds:1757720967 nanos:499454774}"
Sep 12 23:49:27.507122 containerd[1530]: time="2025-09-12T23:49:27.506937776Z" level=info msg="received exit event container_id:\"0dd43e26e4b1b2242decdfdd3a8d65d5ff7feb4b0232d09eaa332ca17d3a46b9\" id:\"0dd43e26e4b1b2242decdfdd3a8d65d5ff7feb4b0232d09eaa332ca17d3a46b9\" pid:3432 exited_at:{seconds:1757720967 nanos:499454774}"
Sep 12 23:49:27.526035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0dd43e26e4b1b2242decdfdd3a8d65d5ff7feb4b0232d09eaa332ca17d3a46b9-rootfs.mount: Deactivated successfully.
Sep 12 23:49:27.571720 kubelet[2670]: I0912 23:49:27.571658 2670 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 12 23:49:27.727801 systemd[1]: Created slice kubepods-burstable-podfe0b7f3e_00e2_40e2_8217_c68c41916a8a.slice - libcontainer container kubepods-burstable-podfe0b7f3e_00e2_40e2_8217_c68c41916a8a.slice.
Sep 12 23:49:27.743559 systemd[1]: Created slice kubepods-besteffort-pode0ae7764_8dc3_457b_a3e1_0c3d7d7f329e.slice - libcontainer container kubepods-besteffort-pode0ae7764_8dc3_457b_a3e1_0c3d7d7f329e.slice.
Sep 12 23:49:27.765427 kubelet[2670]: I0912 23:49:27.764787 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd9ht\" (UniqueName: \"kubernetes.io/projected/abe6524f-32ea-4fa9-86c5-c919366ca996-kube-api-access-pd9ht\") pod \"calico-apiserver-6c649579cc-bn98f\" (UID: \"abe6524f-32ea-4fa9-86c5-c919366ca996\") " pod="calico-apiserver/calico-apiserver-6c649579cc-bn98f"
Sep 12 23:49:27.765427 kubelet[2670]: I0912 23:49:27.764829 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a-goldmane-key-pair\") pod \"goldmane-54d579b49d-jbcmc\" (UID: \"ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a\") " pod="calico-system/goldmane-54d579b49d-jbcmc"
Sep 12 23:49:27.765427 kubelet[2670]: I0912 23:49:27.764918 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgx6r\" (UniqueName: \"kubernetes.io/projected/ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a-kube-api-access-vgx6r\") pod \"goldmane-54d579b49d-jbcmc\" (UID: \"ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a\") " pod="calico-system/goldmane-54d579b49d-jbcmc"
Sep 12 23:49:27.765427 kubelet[2670]: I0912 23:49:27.764984 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8q67\" (UniqueName: \"kubernetes.io/projected/e0ae7764-8dc3-457b-a3e1-0c3d7d7f329e-kube-api-access-d8q67\") pod \"calico-kube-controllers-6f67f4748b-597zd\" (UID: \"e0ae7764-8dc3-457b-a3e1-0c3d7d7f329e\") " pod="calico-system/calico-kube-controllers-6f67f4748b-597zd"
Sep 12 23:49:27.765427 kubelet[2670]: I0912 23:49:27.765008 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hp9m\" (UniqueName: \"kubernetes.io/projected/444f5932-6f41-49ae-b1ad-2d15103b287b-kube-api-access-6hp9m\") pod \"coredns-674b8bbfcf-rvcjx\" (UID: \"444f5932-6f41-49ae-b1ad-2d15103b287b\") " pod="kube-system/coredns-674b8bbfcf-rvcjx"
Sep 12 23:49:27.765618 kubelet[2670]: I0912 23:49:27.765024 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zv2vr\" (UniqueName: \"kubernetes.io/projected/fe0b7f3e-00e2-40e2-8217-c68c41916a8a-kube-api-access-zv2vr\") pod \"coredns-674b8bbfcf-4ljq7\" (UID: \"fe0b7f3e-00e2-40e2-8217-c68c41916a8a\") " pod="kube-system/coredns-674b8bbfcf-4ljq7"
Sep 12 23:49:27.765618 kubelet[2670]: I0912 23:49:27.765043 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fe0b7f3e-00e2-40e2-8217-c68c41916a8a-config-volume\") pod \"coredns-674b8bbfcf-4ljq7\" (UID: \"fe0b7f3e-00e2-40e2-8217-c68c41916a8a\") " pod="kube-system/coredns-674b8bbfcf-4ljq7"
Sep 12 23:49:27.765618 kubelet[2670]: I0912 23:49:27.765068 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0ae7764-8dc3-457b-a3e1-0c3d7d7f329e-tigera-ca-bundle\") pod \"calico-kube-controllers-6f67f4748b-597zd\" (UID: \"e0ae7764-8dc3-457b-a3e1-0c3d7d7f329e\") " pod="calico-system/calico-kube-controllers-6f67f4748b-597zd"
Sep 12 23:49:27.765618 kubelet[2670]: I0912 23:49:27.765092 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/abe6524f-32ea-4fa9-86c5-c919366ca996-calico-apiserver-certs\") pod \"calico-apiserver-6c649579cc-bn98f\" (UID: \"abe6524f-32ea-4fa9-86c5-c919366ca996\") " pod="calico-apiserver/calico-apiserver-6c649579cc-bn98f"
Sep 12 23:49:27.765618 kubelet[2670]: I0912 23:49:27.765107 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a-config\") pod \"goldmane-54d579b49d-jbcmc\" (UID: \"ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a\") " pod="calico-system/goldmane-54d579b49d-jbcmc"
Sep 12 23:49:27.765545 systemd[1]: Created slice kubepods-besteffort-podabe6524f_32ea_4fa9_86c5_c919366ca996.slice - libcontainer container kubepods-besteffort-podabe6524f_32ea_4fa9_86c5_c919366ca996.slice.
Sep 12 23:49:27.765790 kubelet[2670]: I0912 23:49:27.765121 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-jbcmc\" (UID: \"ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a\") " pod="calico-system/goldmane-54d579b49d-jbcmc"
Sep 12 23:49:27.765790 kubelet[2670]: I0912 23:49:27.765164 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/444f5932-6f41-49ae-b1ad-2d15103b287b-config-volume\") pod \"coredns-674b8bbfcf-rvcjx\" (UID: \"444f5932-6f41-49ae-b1ad-2d15103b287b\") " pod="kube-system/coredns-674b8bbfcf-rvcjx"
Sep 12 23:49:27.774276 systemd[1]: Created slice kubepods-besteffort-poded6fc0a2_bf30_4b3f_a476_7f220b6cb40a.slice - libcontainer container kubepods-besteffort-poded6fc0a2_bf30_4b3f_a476_7f220b6cb40a.slice.
Sep 12 23:49:27.783452 systemd[1]: Created slice kubepods-burstable-pod444f5932_6f41_49ae_b1ad_2d15103b287b.slice - libcontainer container kubepods-burstable-pod444f5932_6f41_49ae_b1ad_2d15103b287b.slice.
Sep 12 23:49:27.790913 systemd[1]: Created slice kubepods-besteffort-podd22db48d_3cb6_4db1_8c4d_e3eb64653e01.slice - libcontainer container kubepods-besteffort-podd22db48d_3cb6_4db1_8c4d_e3eb64653e01.slice.
Sep 12 23:49:27.800096 systemd[1]: Created slice kubepods-besteffort-pod02f97b23_2b5e_4cdd_a600_f252898d3452.slice - libcontainer container kubepods-besteffort-pod02f97b23_2b5e_4cdd_a600_f252898d3452.slice.
Sep 12 23:49:27.866046 kubelet[2670]: I0912 23:49:27.866004 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5hb6\" (UniqueName: \"kubernetes.io/projected/02f97b23-2b5e-4cdd-a600-f252898d3452-kube-api-access-x5hb6\") pod \"whisker-869f84b888-xlf75\" (UID: \"02f97b23-2b5e-4cdd-a600-f252898d3452\") " pod="calico-system/whisker-869f84b888-xlf75"
Sep 12 23:49:27.866046 kubelet[2670]: I0912 23:49:27.866095 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/02f97b23-2b5e-4cdd-a600-f252898d3452-whisker-backend-key-pair\") pod \"whisker-869f84b888-xlf75\" (UID: \"02f97b23-2b5e-4cdd-a600-f252898d3452\") " pod="calico-system/whisker-869f84b888-xlf75"
Sep 12 23:49:27.866046 kubelet[2670]: I0912 23:49:27.866127 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d22db48d-3cb6-4db1-8c4d-e3eb64653e01-calico-apiserver-certs\") pod \"calico-apiserver-6c649579cc-twspx\" (UID: \"d22db48d-3cb6-4db1-8c4d-e3eb64653e01\") " pod="calico-apiserver/calico-apiserver-6c649579cc-twspx"
Sep 12 23:49:27.866046 kubelet[2670]: I0912 23:49:27.866170 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8xnv\" (UniqueName: \"kubernetes.io/projected/d22db48d-3cb6-4db1-8c4d-e3eb64653e01-kube-api-access-z8xnv\") pod \"calico-apiserver-6c649579cc-twspx\" (UID: \"d22db48d-3cb6-4db1-8c4d-e3eb64653e01\") " pod="calico-apiserver/calico-apiserver-6c649579cc-twspx"
Sep 12 23:49:27.866046 kubelet[2670]: I0912 23:49:27.866217 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02f97b23-2b5e-4cdd-a600-f252898d3452-whisker-ca-bundle\") pod \"whisker-869f84b888-xlf75\" (UID: \"02f97b23-2b5e-4cdd-a600-f252898d3452\") " pod="calico-system/whisker-869f84b888-xlf75"
Sep 12 23:49:28.038561 containerd[1530]: time="2025-09-12T23:49:28.038456809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4ljq7,Uid:fe0b7f3e-00e2-40e2-8217-c68c41916a8a,Namespace:kube-system,Attempt:0,}"
Sep 12 23:49:28.064699 containerd[1530]: time="2025-09-12T23:49:28.064583098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 12 23:49:28.067287 containerd[1530]: time="2025-09-12T23:49:28.067239219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f67f4748b-597zd,Uid:e0ae7764-8dc3-457b-a3e1-0c3d7d7f329e,Namespace:calico-system,Attempt:0,}"
Sep 12 23:49:28.083519 containerd[1530]: time="2025-09-12T23:49:28.083471425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jbcmc,Uid:ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a,Namespace:calico-system,Attempt:0,}"
Sep 12 23:49:28.088887 containerd[1530]: time="2025-09-12T23:49:28.088791186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c649579cc-bn98f,Uid:abe6524f-32ea-4fa9-86c5-c919366ca996,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 23:49:28.101417 containerd[1530]: time="2025-09-12T23:49:28.101066071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c649579cc-twspx,Uid:d22db48d-3cb6-4db1-8c4d-e3eb64653e01,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 23:49:28.101895 containerd[1530]: time="2025-09-12T23:49:28.101868511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rvcjx,Uid:444f5932-6f41-49ae-b1ad-2d15103b287b,Namespace:kube-system,Attempt:0,}"
Sep 12 23:49:28.104093 containerd[1530]: time="2025-09-12T23:49:28.104061752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-869f84b888-xlf75,Uid:02f97b23-2b5e-4cdd-a600-f252898d3452,Namespace:calico-system,Attempt:0,}"
Sep 12 23:49:28.184058 containerd[1530]: time="2025-09-12T23:49:28.184001939Z" level=error msg="Failed to destroy network for sandbox \"20f53b206f2edc56ed6f201546b9eec79ca281d84f3e03bb3a90377a228c795b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 23:49:28.187985 containerd[1530]: time="2025-09-12T23:49:28.187641660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jbcmc,Uid:ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20f53b206f2edc56ed6f201546b9eec79ca281d84f3e03bb3a90377a228c795b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 23:49:28.190044 kubelet[2670]: E0912 23:49:28.189995 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20f53b206f2edc56ed6f201546b9eec79ca281d84f3e03bb3a90377a228c795b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 23:49:28.191502 kubelet[2670]: E0912 23:49:28.190305 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20f53b206f2edc56ed6f201546b9eec79ca281d84f3e03bb3a90377a228c795b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-jbcmc"
Sep 12 23:49:28.191502 kubelet[2670]: E0912 23:49:28.190333 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20f53b206f2edc56ed6f201546b9eec79ca281d84f3e03bb3a90377a228c795b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-jbcmc"
Sep 12 23:49:28.191502 kubelet[2670]: E0912 23:49:28.190390 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-jbcmc_calico-system(ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-jbcmc_calico-system(ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20f53b206f2edc56ed6f201546b9eec79ca281d84f3e03bb3a90377a228c795b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-jbcmc" podUID="ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a"
Sep 12 23:49:28.208368 containerd[1530]: time="2025-09-12T23:49:28.208309747Z" level=error msg="Failed to destroy network for sandbox \"d30db110e59dc273ee803a8ef6eea85ab34f6c6e75687dd7b2756a97ddd7186a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 23:49:28.208495 containerd[1530]: time="2025-09-12T23:49:28.208370347Z" level=error msg="Failed to destroy network for sandbox \"819f96f2fa4b6e935f7e62a62e8aa39617976f719be8c32cd3af39a80e6baa9e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 23:49:28.211549 containerd[1530]: time="2025-09-12T23:49:28.211457908Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f67f4748b-597zd,Uid:e0ae7764-8dc3-457b-a3e1-0c3d7d7f329e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d30db110e59dc273ee803a8ef6eea85ab34f6c6e75687dd7b2756a97ddd7186a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 23:49:28.211853 kubelet[2670]: E0912 23:49:28.211818 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d30db110e59dc273ee803a8ef6eea85ab34f6c6e75687dd7b2756a97ddd7186a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 23:49:28.211990 kubelet[2670]: E0912 23:49:28.211972 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d30db110e59dc273ee803a8ef6eea85ab34f6c6e75687dd7b2756a97ddd7186a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f67f4748b-597zd"
Sep 12 23:49:28.212218 kubelet[2670]: E0912 23:49:28.212049 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for
sandbox \"d30db110e59dc273ee803a8ef6eea85ab34f6c6e75687dd7b2756a97ddd7186a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f67f4748b-597zd" Sep 12 23:49:28.212341 kubelet[2670]: E0912 23:49:28.212126 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f67f4748b-597zd_calico-system(e0ae7764-8dc3-457b-a3e1-0c3d7d7f329e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f67f4748b-597zd_calico-system(e0ae7764-8dc3-457b-a3e1-0c3d7d7f329e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d30db110e59dc273ee803a8ef6eea85ab34f6c6e75687dd7b2756a97ddd7186a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f67f4748b-597zd" podUID="e0ae7764-8dc3-457b-a3e1-0c3d7d7f329e" Sep 12 23:49:28.216439 containerd[1530]: time="2025-09-12T23:49:28.216344990Z" level=error msg="Failed to destroy network for sandbox \"7b0876db4315ade3dce6b5a049633b2e9eb1315a8bb64b6ba414c315b18b56d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:28.217737 containerd[1530]: time="2025-09-12T23:49:28.217694750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4ljq7,Uid:fe0b7f3e-00e2-40e2-8217-c68c41916a8a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"819f96f2fa4b6e935f7e62a62e8aa39617976f719be8c32cd3af39a80e6baa9e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:28.218211 kubelet[2670]: E0912 23:49:28.217911 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"819f96f2fa4b6e935f7e62a62e8aa39617976f719be8c32cd3af39a80e6baa9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:28.218211 kubelet[2670]: E0912 23:49:28.217987 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"819f96f2fa4b6e935f7e62a62e8aa39617976f719be8c32cd3af39a80e6baa9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4ljq7" Sep 12 23:49:28.218211 kubelet[2670]: E0912 23:49:28.218006 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"819f96f2fa4b6e935f7e62a62e8aa39617976f719be8c32cd3af39a80e6baa9e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-4ljq7" Sep 12 23:49:28.218345 kubelet[2670]: E0912 23:49:28.218053 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-4ljq7_kube-system(fe0b7f3e-00e2-40e2-8217-c68c41916a8a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-4ljq7_kube-system(fe0b7f3e-00e2-40e2-8217-c68c41916a8a)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"819f96f2fa4b6e935f7e62a62e8aa39617976f719be8c32cd3af39a80e6baa9e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-4ljq7" podUID="fe0b7f3e-00e2-40e2-8217-c68c41916a8a" Sep 12 23:49:28.218391 containerd[1530]: time="2025-09-12T23:49:28.218222031Z" level=error msg="Failed to destroy network for sandbox \"873fc5aa4b45cbd4af3b23a8f5237c84587fec9cd13a4800079be76cd98c0d6c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:28.225536 containerd[1530]: time="2025-09-12T23:49:28.225422753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c649579cc-bn98f,Uid:abe6524f-32ea-4fa9-86c5-c919366ca996,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b0876db4315ade3dce6b5a049633b2e9eb1315a8bb64b6ba414c315b18b56d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:28.226159 kubelet[2670]: E0912 23:49:28.225811 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b0876db4315ade3dce6b5a049633b2e9eb1315a8bb64b6ba414c315b18b56d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:28.226159 kubelet[2670]: E0912 23:49:28.225886 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7b0876db4315ade3dce6b5a049633b2e9eb1315a8bb64b6ba414c315b18b56d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c649579cc-bn98f" Sep 12 23:49:28.226159 kubelet[2670]: E0912 23:49:28.225942 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b0876db4315ade3dce6b5a049633b2e9eb1315a8bb64b6ba414c315b18b56d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c649579cc-bn98f" Sep 12 23:49:28.227542 kubelet[2670]: E0912 23:49:28.225995 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c649579cc-bn98f_calico-apiserver(abe6524f-32ea-4fa9-86c5-c919366ca996)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c649579cc-bn98f_calico-apiserver(abe6524f-32ea-4fa9-86c5-c919366ca996)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b0876db4315ade3dce6b5a049633b2e9eb1315a8bb64b6ba414c315b18b56d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c649579cc-bn98f" podUID="abe6524f-32ea-4fa9-86c5-c919366ca996" Sep 12 23:49:28.227647 containerd[1530]: time="2025-09-12T23:49:28.226286713Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-869f84b888-xlf75,Uid:02f97b23-2b5e-4cdd-a600-f252898d3452,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"873fc5aa4b45cbd4af3b23a8f5237c84587fec9cd13a4800079be76cd98c0d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:28.227817 kubelet[2670]: E0912 23:49:28.227786 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"873fc5aa4b45cbd4af3b23a8f5237c84587fec9cd13a4800079be76cd98c0d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:28.228096 kubelet[2670]: E0912 23:49:28.228011 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"873fc5aa4b45cbd4af3b23a8f5237c84587fec9cd13a4800079be76cd98c0d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-869f84b888-xlf75" Sep 12 23:49:28.228096 kubelet[2670]: E0912 23:49:28.228038 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"873fc5aa4b45cbd4af3b23a8f5237c84587fec9cd13a4800079be76cd98c0d6c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-869f84b888-xlf75" Sep 12 23:49:28.228322 kubelet[2670]: E0912 23:49:28.228273 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-869f84b888-xlf75_calico-system(02f97b23-2b5e-4cdd-a600-f252898d3452)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-869f84b888-xlf75_calico-system(02f97b23-2b5e-4cdd-a600-f252898d3452)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"873fc5aa4b45cbd4af3b23a8f5237c84587fec9cd13a4800079be76cd98c0d6c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-869f84b888-xlf75" podUID="02f97b23-2b5e-4cdd-a600-f252898d3452" Sep 12 23:49:28.228398 containerd[1530]: time="2025-09-12T23:49:28.228307914Z" level=error msg="Failed to destroy network for sandbox \"82bc1cee40df70e9f6871e5449a77b735b0bbd78969406c2de867c69bd08c8a4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:28.229689 containerd[1530]: time="2025-09-12T23:49:28.229599155Z" level=error msg="Failed to destroy network for sandbox \"023e358def3377b9c3f8a24285d00c953897ea68b7008a620c6a8dfcb4d4afa4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:28.229988 containerd[1530]: time="2025-09-12T23:49:28.229959635Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rvcjx,Uid:444f5932-6f41-49ae-b1ad-2d15103b287b,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82bc1cee40df70e9f6871e5449a77b735b0bbd78969406c2de867c69bd08c8a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:28.230195 kubelet[2670]: E0912 23:49:28.230133 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"82bc1cee40df70e9f6871e5449a77b735b0bbd78969406c2de867c69bd08c8a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:28.230459 kubelet[2670]: E0912 23:49:28.230322 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82bc1cee40df70e9f6871e5449a77b735b0bbd78969406c2de867c69bd08c8a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rvcjx" Sep 12 23:49:28.230523 kubelet[2670]: E0912 23:49:28.230346 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82bc1cee40df70e9f6871e5449a77b735b0bbd78969406c2de867c69bd08c8a4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rvcjx" Sep 12 23:49:28.230599 kubelet[2670]: E0912 23:49:28.230533 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rvcjx_kube-system(444f5932-6f41-49ae-b1ad-2d15103b287b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rvcjx_kube-system(444f5932-6f41-49ae-b1ad-2d15103b287b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82bc1cee40df70e9f6871e5449a77b735b0bbd78969406c2de867c69bd08c8a4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-674b8bbfcf-rvcjx" podUID="444f5932-6f41-49ae-b1ad-2d15103b287b" Sep 12 23:49:28.232753 containerd[1530]: time="2025-09-12T23:49:28.232702836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c649579cc-twspx,Uid:d22db48d-3cb6-4db1-8c4d-e3eb64653e01,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"023e358def3377b9c3f8a24285d00c953897ea68b7008a620c6a8dfcb4d4afa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:28.232937 kubelet[2670]: E0912 23:49:28.232908 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"023e358def3377b9c3f8a24285d00c953897ea68b7008a620c6a8dfcb4d4afa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:28.233135 kubelet[2670]: E0912 23:49:28.233035 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"023e358def3377b9c3f8a24285d00c953897ea68b7008a620c6a8dfcb4d4afa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c649579cc-twspx" Sep 12 23:49:28.233135 kubelet[2670]: E0912 23:49:28.233057 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"023e358def3377b9c3f8a24285d00c953897ea68b7008a620c6a8dfcb4d4afa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c649579cc-twspx" Sep 12 23:49:28.233135 kubelet[2670]: E0912 23:49:28.233099 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c649579cc-twspx_calico-apiserver(d22db48d-3cb6-4db1-8c4d-e3eb64653e01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c649579cc-twspx_calico-apiserver(d22db48d-3cb6-4db1-8c4d-e3eb64653e01)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"023e358def3377b9c3f8a24285d00c953897ea68b7008a620c6a8dfcb4d4afa4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c649579cc-twspx" podUID="d22db48d-3cb6-4db1-8c4d-e3eb64653e01" Sep 12 23:49:28.980500 systemd[1]: Created slice kubepods-besteffort-pod4b1d8eb9_d498_4006_84a4_c9d3a374aa3e.slice - libcontainer container kubepods-besteffort-pod4b1d8eb9_d498_4006_84a4_c9d3a374aa3e.slice. 
Sep 12 23:49:28.983681 containerd[1530]: time="2025-09-12T23:49:28.983430892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6grrt,Uid:4b1d8eb9-d498-4006-84a4-c9d3a374aa3e,Namespace:calico-system,Attempt:0,}" Sep 12 23:49:29.051845 containerd[1530]: time="2025-09-12T23:49:29.051798954Z" level=error msg="Failed to destroy network for sandbox \"a02b44547745e01ec172076c54157f51aab18f55261f794b5b41d4b43c111997\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:29.055288 containerd[1530]: time="2025-09-12T23:49:29.055054276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6grrt,Uid:4b1d8eb9-d498-4006-84a4-c9d3a374aa3e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a02b44547745e01ec172076c54157f51aab18f55261f794b5b41d4b43c111997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:29.055696 kubelet[2670]: E0912 23:49:29.055624 2670 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a02b44547745e01ec172076c54157f51aab18f55261f794b5b41d4b43c111997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 23:49:29.055775 kubelet[2670]: E0912 23:49:29.055742 2670 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a02b44547745e01ec172076c54157f51aab18f55261f794b5b41d4b43c111997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6grrt" Sep 12 23:49:29.055799 kubelet[2670]: E0912 23:49:29.055784 2670 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a02b44547745e01ec172076c54157f51aab18f55261f794b5b41d4b43c111997\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6grrt" Sep 12 23:49:29.055851 kubelet[2670]: E0912 23:49:29.055828 2670 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-6grrt_calico-system(4b1d8eb9-d498-4006-84a4-c9d3a374aa3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6grrt_calico-system(4b1d8eb9-d498-4006-84a4-c9d3a374aa3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a02b44547745e01ec172076c54157f51aab18f55261f794b5b41d4b43c111997\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6grrt" podUID="4b1d8eb9-d498-4006-84a4-c9d3a374aa3e" Sep 12 23:49:29.056419 systemd[1]: run-netns-cni\x2dec0113e9\x2d5a7f\x2d27ed\x2d236e\x2d5ca0193292e1.mount: Deactivated successfully. Sep 12 23:49:31.207443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4127790107.mount: Deactivated successfully. 
Sep 12 23:49:31.259771 containerd[1530]: time="2025-09-12T23:49:31.259713712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:31.261470 containerd[1530]: time="2025-09-12T23:49:31.261427592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 12 23:49:31.262437 containerd[1530]: time="2025-09-12T23:49:31.262378713Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:31.264117 containerd[1530]: time="2025-09-12T23:49:31.264088833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:31.264577 containerd[1530]: time="2025-09-12T23:49:31.264552753Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.199929695s" Sep 12 23:49:31.264612 containerd[1530]: time="2025-09-12T23:49:31.264587033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 12 23:49:31.284548 containerd[1530]: time="2025-09-12T23:49:31.284501879Z" level=info msg="CreateContainer within sandbox \"2489271798a602295c01024dd58d1d17eb9b36ce4261ebb2a9e9d89e608f1d8c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 23:49:31.297181 containerd[1530]: time="2025-09-12T23:49:31.295867762Z" level=info msg="Container 
a5d512e61689f4c118cca2cb6893f13882fe7df182ffa8b4178a8b6b3f2223cd: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:31.297818 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1291769210.mount: Deactivated successfully. Sep 12 23:49:31.306876 containerd[1530]: time="2025-09-12T23:49:31.306748725Z" level=info msg="CreateContainer within sandbox \"2489271798a602295c01024dd58d1d17eb9b36ce4261ebb2a9e9d89e608f1d8c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a5d512e61689f4c118cca2cb6893f13882fe7df182ffa8b4178a8b6b3f2223cd\"" Sep 12 23:49:31.307565 containerd[1530]: time="2025-09-12T23:49:31.307541205Z" level=info msg="StartContainer for \"a5d512e61689f4c118cca2cb6893f13882fe7df182ffa8b4178a8b6b3f2223cd\"" Sep 12 23:49:31.316960 containerd[1530]: time="2025-09-12T23:49:31.316901728Z" level=info msg="connecting to shim a5d512e61689f4c118cca2cb6893f13882fe7df182ffa8b4178a8b6b3f2223cd" address="unix:///run/containerd/s/eee8861131537a42bbfe385255acb2d4c369cd18b90eab58814e1d6f39d5ed61" protocol=ttrpc version=3 Sep 12 23:49:31.338362 systemd[1]: Started cri-containerd-a5d512e61689f4c118cca2cb6893f13882fe7df182ffa8b4178a8b6b3f2223cd.scope - libcontainer container a5d512e61689f4c118cca2cb6893f13882fe7df182ffa8b4178a8b6b3f2223cd. Sep 12 23:49:31.374305 containerd[1530]: time="2025-09-12T23:49:31.374265144Z" level=info msg="StartContainer for \"a5d512e61689f4c118cca2cb6893f13882fe7df182ffa8b4178a8b6b3f2223cd\" returns successfully" Sep 12 23:49:31.494217 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 23:49:31.494332 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 12 23:49:31.800406 kubelet[2670]: I0912 23:49:31.800306 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5hb6\" (UniqueName: \"kubernetes.io/projected/02f97b23-2b5e-4cdd-a600-f252898d3452-kube-api-access-x5hb6\") pod \"02f97b23-2b5e-4cdd-a600-f252898d3452\" (UID: \"02f97b23-2b5e-4cdd-a600-f252898d3452\") " Sep 12 23:49:31.800406 kubelet[2670]: I0912 23:49:31.800353 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02f97b23-2b5e-4cdd-a600-f252898d3452-whisker-ca-bundle\") pod \"02f97b23-2b5e-4cdd-a600-f252898d3452\" (UID: \"02f97b23-2b5e-4cdd-a600-f252898d3452\") " Sep 12 23:49:31.800406 kubelet[2670]: I0912 23:49:31.800389 2670 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/02f97b23-2b5e-4cdd-a600-f252898d3452-whisker-backend-key-pair\") pod \"02f97b23-2b5e-4cdd-a600-f252898d3452\" (UID: \"02f97b23-2b5e-4cdd-a600-f252898d3452\") " Sep 12 23:49:31.815380 kubelet[2670]: I0912 23:49:31.815322 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/02f97b23-2b5e-4cdd-a600-f252898d3452-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "02f97b23-2b5e-4cdd-a600-f252898d3452" (UID: "02f97b23-2b5e-4cdd-a600-f252898d3452"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 23:49:31.816074 kubelet[2670]: I0912 23:49:31.816035 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02f97b23-2b5e-4cdd-a600-f252898d3452-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "02f97b23-2b5e-4cdd-a600-f252898d3452" (UID: "02f97b23-2b5e-4cdd-a600-f252898d3452"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 23:49:31.816277 kubelet[2670]: I0912 23:49:31.816240 2670 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02f97b23-2b5e-4cdd-a600-f252898d3452-kube-api-access-x5hb6" (OuterVolumeSpecName: "kube-api-access-x5hb6") pod "02f97b23-2b5e-4cdd-a600-f252898d3452" (UID: "02f97b23-2b5e-4cdd-a600-f252898d3452"). InnerVolumeSpecName "kube-api-access-x5hb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 23:49:31.900703 kubelet[2670]: I0912 23:49:31.900651 2670 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x5hb6\" (UniqueName: \"kubernetes.io/projected/02f97b23-2b5e-4cdd-a600-f252898d3452-kube-api-access-x5hb6\") on node \"localhost\" DevicePath \"\"" Sep 12 23:49:31.900703 kubelet[2670]: I0912 23:49:31.900689 2670 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02f97b23-2b5e-4cdd-a600-f252898d3452-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 12 23:49:31.900703 kubelet[2670]: I0912 23:49:31.900698 2670 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/02f97b23-2b5e-4cdd-a600-f252898d3452-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 12 23:49:31.985921 systemd[1]: Removed slice kubepods-besteffort-pod02f97b23_2b5e_4cdd_a600_f252898d3452.slice - libcontainer container kubepods-besteffort-pod02f97b23_2b5e_4cdd_a600_f252898d3452.slice. 
Sep 12 23:49:32.102218 kubelet[2670]: I0912 23:49:32.102062 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xmx7s" podStartSLOduration=1.901950502 podStartE2EDuration="12.102046427s" podCreationTimestamp="2025-09-12 23:49:20 +0000 UTC" firstStartedPulling="2025-09-12 23:49:21.065152028 +0000 UTC m=+21.188466716" lastFinishedPulling="2025-09-12 23:49:31.265247953 +0000 UTC m=+31.388562641" observedRunningTime="2025-09-12 23:49:32.092383745 +0000 UTC m=+32.215698433" watchObservedRunningTime="2025-09-12 23:49:32.102046427 +0000 UTC m=+32.225361115" Sep 12 23:49:32.151074 systemd[1]: Created slice kubepods-besteffort-pod34dc20d8_df49_41ec_912c_7164fea7620d.slice - libcontainer container kubepods-besteffort-pod34dc20d8_df49_41ec_912c_7164fea7620d.slice. Sep 12 23:49:32.208611 systemd[1]: var-lib-kubelet-pods-02f97b23\x2d2b5e\x2d4cdd\x2da600\x2df252898d3452-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx5hb6.mount: Deactivated successfully. Sep 12 23:49:32.208699 systemd[1]: var-lib-kubelet-pods-02f97b23\x2d2b5e\x2d4cdd\x2da600\x2df252898d3452-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 12 23:49:32.303297 kubelet[2670]: I0912 23:49:32.303220 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/34dc20d8-df49-41ec-912c-7164fea7620d-whisker-ca-bundle\") pod \"whisker-6bcbff775f-lbxp5\" (UID: \"34dc20d8-df49-41ec-912c-7164fea7620d\") " pod="calico-system/whisker-6bcbff775f-lbxp5" Sep 12 23:49:32.303297 kubelet[2670]: I0912 23:49:32.303301 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj4m2\" (UniqueName: \"kubernetes.io/projected/34dc20d8-df49-41ec-912c-7164fea7620d-kube-api-access-fj4m2\") pod \"whisker-6bcbff775f-lbxp5\" (UID: \"34dc20d8-df49-41ec-912c-7164fea7620d\") " pod="calico-system/whisker-6bcbff775f-lbxp5" Sep 12 23:49:32.303462 kubelet[2670]: I0912 23:49:32.303380 2670 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/34dc20d8-df49-41ec-912c-7164fea7620d-whisker-backend-key-pair\") pod \"whisker-6bcbff775f-lbxp5\" (UID: \"34dc20d8-df49-41ec-912c-7164fea7620d\") " pod="calico-system/whisker-6bcbff775f-lbxp5" Sep 12 23:49:32.456463 containerd[1530]: time="2025-09-12T23:49:32.456114041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bcbff775f-lbxp5,Uid:34dc20d8-df49-41ec-912c-7164fea7620d,Namespace:calico-system,Attempt:0,}" Sep 12 23:49:32.640929 systemd-networkd[1430]: califb64ce1d6d3: Link UP Sep 12 23:49:32.641194 systemd-networkd[1430]: califb64ce1d6d3: Gained carrier Sep 12 23:49:32.656357 containerd[1530]: 2025-09-12 23:49:32.488 [INFO][3804] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:49:32.656357 containerd[1530]: 2025-09-12 23:49:32.518 [INFO][3804] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{localhost-k8s-whisker--6bcbff775f--lbxp5-eth0 whisker-6bcbff775f- calico-system 34dc20d8-df49-41ec-912c-7164fea7620d 908 0 2025-09-12 23:49:32 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6bcbff775f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-6bcbff775f-lbxp5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califb64ce1d6d3 [] [] }} ContainerID="8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" Namespace="calico-system" Pod="whisker-6bcbff775f-lbxp5" WorkloadEndpoint="localhost-k8s-whisker--6bcbff775f--lbxp5-" Sep 12 23:49:32.656357 containerd[1530]: 2025-09-12 23:49:32.518 [INFO][3804] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" Namespace="calico-system" Pod="whisker-6bcbff775f-lbxp5" WorkloadEndpoint="localhost-k8s-whisker--6bcbff775f--lbxp5-eth0" Sep 12 23:49:32.656357 containerd[1530]: 2025-09-12 23:49:32.593 [INFO][3818] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" HandleID="k8s-pod-network.8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" Workload="localhost-k8s-whisker--6bcbff775f--lbxp5-eth0" Sep 12 23:49:32.656590 containerd[1530]: 2025-09-12 23:49:32.593 [INFO][3818] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" HandleID="k8s-pod-network.8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" Workload="localhost-k8s-whisker--6bcbff775f--lbxp5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400047d880), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-6bcbff775f-lbxp5", "timestamp":"2025-09-12 23:49:32.593231317 +0000 
UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:32.656590 containerd[1530]: 2025-09-12 23:49:32.593 [INFO][3818] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:32.656590 containerd[1530]: 2025-09-12 23:49:32.593 [INFO][3818] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:49:32.656590 containerd[1530]: 2025-09-12 23:49:32.593 [INFO][3818] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:32.656590 containerd[1530]: 2025-09-12 23:49:32.604 [INFO][3818] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" host="localhost" Sep 12 23:49:32.656590 containerd[1530]: 2025-09-12 23:49:32.609 [INFO][3818] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:32.656590 containerd[1530]: 2025-09-12 23:49:32.613 [INFO][3818] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:32.656590 containerd[1530]: 2025-09-12 23:49:32.615 [INFO][3818] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:32.656590 containerd[1530]: 2025-09-12 23:49:32.618 [INFO][3818] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:32.656590 containerd[1530]: 2025-09-12 23:49:32.618 [INFO][3818] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" host="localhost" Sep 12 23:49:32.656790 containerd[1530]: 2025-09-12 23:49:32.619 [INFO][3818] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a Sep 12 23:49:32.656790 containerd[1530]: 2025-09-12 23:49:32.623 [INFO][3818] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" host="localhost" Sep 12 23:49:32.656790 containerd[1530]: 2025-09-12 23:49:32.630 [INFO][3818] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" host="localhost" Sep 12 23:49:32.656790 containerd[1530]: 2025-09-12 23:49:32.630 [INFO][3818] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" host="localhost" Sep 12 23:49:32.656790 containerd[1530]: 2025-09-12 23:49:32.630 [INFO][3818] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:49:32.656790 containerd[1530]: 2025-09-12 23:49:32.630 [INFO][3818] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" HandleID="k8s-pod-network.8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" Workload="localhost-k8s-whisker--6bcbff775f--lbxp5-eth0" Sep 12 23:49:32.656941 containerd[1530]: 2025-09-12 23:49:32.632 [INFO][3804] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" Namespace="calico-system" Pod="whisker-6bcbff775f-lbxp5" WorkloadEndpoint="localhost-k8s-whisker--6bcbff775f--lbxp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6bcbff775f--lbxp5-eth0", GenerateName:"whisker-6bcbff775f-", Namespace:"calico-system", SelfLink:"", UID:"34dc20d8-df49-41ec-912c-7164fea7620d", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bcbff775f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-6bcbff775f-lbxp5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califb64ce1d6d3", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:32.656941 containerd[1530]: 2025-09-12 23:49:32.632 [INFO][3804] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" Namespace="calico-system" Pod="whisker-6bcbff775f-lbxp5" WorkloadEndpoint="localhost-k8s-whisker--6bcbff775f--lbxp5-eth0" Sep 12 23:49:32.657007 containerd[1530]: 2025-09-12 23:49:32.633 [INFO][3804] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb64ce1d6d3 ContainerID="8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" Namespace="calico-system" Pod="whisker-6bcbff775f-lbxp5" WorkloadEndpoint="localhost-k8s-whisker--6bcbff775f--lbxp5-eth0" Sep 12 23:49:32.657007 containerd[1530]: 2025-09-12 23:49:32.641 [INFO][3804] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" Namespace="calico-system" Pod="whisker-6bcbff775f-lbxp5" WorkloadEndpoint="localhost-k8s-whisker--6bcbff775f--lbxp5-eth0" Sep 12 23:49:32.657047 containerd[1530]: 2025-09-12 23:49:32.642 [INFO][3804] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" Namespace="calico-system" Pod="whisker-6bcbff775f-lbxp5" WorkloadEndpoint="localhost-k8s-whisker--6bcbff775f--lbxp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--6bcbff775f--lbxp5-eth0", GenerateName:"whisker-6bcbff775f-", Namespace:"calico-system", SelfLink:"", UID:"34dc20d8-df49-41ec-912c-7164fea7620d", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 32, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6bcbff775f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a", Pod:"whisker-6bcbff775f-lbxp5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califb64ce1d6d3", MAC:"a2:78:7e:2f:18:f5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:32.657090 containerd[1530]: 2025-09-12 23:49:32.654 [INFO][3804] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" Namespace="calico-system" Pod="whisker-6bcbff775f-lbxp5" WorkloadEndpoint="localhost-k8s-whisker--6bcbff775f--lbxp5-eth0" Sep 12 23:49:32.764165 containerd[1530]: time="2025-09-12T23:49:32.764100762Z" level=info msg="connecting to shim 8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a" address="unix:///run/containerd/s/3521fc31c46ce26d6c661e988fec37f14aa963ab764656de1edc869ec87c4bdb" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:32.795347 systemd[1]: Started cri-containerd-8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a.scope - libcontainer container 8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a. 
Sep 12 23:49:32.807756 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:49:32.906651 containerd[1530]: time="2025-09-12T23:49:32.906609000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bcbff775f-lbxp5,Uid:34dc20d8-df49-41ec-912c-7164fea7620d,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a\"" Sep 12 23:49:32.908782 containerd[1530]: time="2025-09-12T23:49:32.908744120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 23:49:33.080469 kubelet[2670]: I0912 23:49:33.080374 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:49:33.763854 containerd[1530]: time="2025-09-12T23:49:33.763785133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:33.764331 containerd[1530]: time="2025-09-12T23:49:33.764299973Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 12 23:49:33.765077 containerd[1530]: time="2025-09-12T23:49:33.765052294Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:33.766850 containerd[1530]: time="2025-09-12T23:49:33.766825294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:33.768181 containerd[1530]: time="2025-09-12T23:49:33.768079414Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag 
\"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 859.297054ms" Sep 12 23:49:33.768181 containerd[1530]: time="2025-09-12T23:49:33.768111294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 12 23:49:33.772163 containerd[1530]: time="2025-09-12T23:49:33.771795095Z" level=info msg="CreateContainer within sandbox \"8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 23:49:33.778120 containerd[1530]: time="2025-09-12T23:49:33.778080177Z" level=info msg="Container 0065397814d17e0a90d657c321363429921033dc3728497999cb5e254353d139: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:33.786181 containerd[1530]: time="2025-09-12T23:49:33.786127819Z" level=info msg="CreateContainer within sandbox \"8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0065397814d17e0a90d657c321363429921033dc3728497999cb5e254353d139\"" Sep 12 23:49:33.786613 containerd[1530]: time="2025-09-12T23:49:33.786583179Z" level=info msg="StartContainer for \"0065397814d17e0a90d657c321363429921033dc3728497999cb5e254353d139\"" Sep 12 23:49:33.787927 containerd[1530]: time="2025-09-12T23:49:33.787894579Z" level=info msg="connecting to shim 0065397814d17e0a90d657c321363429921033dc3728497999cb5e254353d139" address="unix:///run/containerd/s/3521fc31c46ce26d6c661e988fec37f14aa963ab764656de1edc869ec87c4bdb" protocol=ttrpc version=3 Sep 12 23:49:33.808314 systemd[1]: Started cri-containerd-0065397814d17e0a90d657c321363429921033dc3728497999cb5e254353d139.scope - libcontainer container 0065397814d17e0a90d657c321363429921033dc3728497999cb5e254353d139. 
Sep 12 23:49:33.849608 containerd[1530]: time="2025-09-12T23:49:33.849574715Z" level=info msg="StartContainer for \"0065397814d17e0a90d657c321363429921033dc3728497999cb5e254353d139\" returns successfully" Sep 12 23:49:33.850938 containerd[1530]: time="2025-09-12T23:49:33.850918955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 23:49:33.978461 kubelet[2670]: I0912 23:49:33.978410 2670 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="02f97b23-2b5e-4cdd-a600-f252898d3452" path="/var/lib/kubelet/pods/02f97b23-2b5e-4cdd-a600-f252898d3452/volumes" Sep 12 23:49:34.164296 systemd-networkd[1430]: califb64ce1d6d3: Gained IPv6LL Sep 12 23:49:35.231366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1501468491.mount: Deactivated successfully. Sep 12 23:49:35.291111 containerd[1530]: time="2025-09-12T23:49:35.291064127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:35.291984 containerd[1530]: time="2025-09-12T23:49:35.291830607Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 12 23:49:35.292808 containerd[1530]: time="2025-09-12T23:49:35.292776368Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:35.295194 containerd[1530]: time="2025-09-12T23:49:35.295133688Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:35.296061 containerd[1530]: time="2025-09-12T23:49:35.295768328Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id 
\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.444762373s" Sep 12 23:49:35.296061 containerd[1530]: time="2025-09-12T23:49:35.295806608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 12 23:49:35.308169 containerd[1530]: time="2025-09-12T23:49:35.308106171Z" level=info msg="CreateContainer within sandbox \"8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 23:49:35.316176 containerd[1530]: time="2025-09-12T23:49:35.315548932Z" level=info msg="Container 4d862fb0ab2d92d82d3bc1bbf25bec28212b8a8d26b265438b7f1dc79082756d: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:35.325586 containerd[1530]: time="2025-09-12T23:49:35.325550015Z" level=info msg="CreateContainer within sandbox \"8c396ed70c67ddf37f1d845a0af95c10c4134ed3dd6937c7e02603f774e7653a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4d862fb0ab2d92d82d3bc1bbf25bec28212b8a8d26b265438b7f1dc79082756d\"" Sep 12 23:49:35.326848 containerd[1530]: time="2025-09-12T23:49:35.326824935Z" level=info msg="StartContainer for \"4d862fb0ab2d92d82d3bc1bbf25bec28212b8a8d26b265438b7f1dc79082756d\"" Sep 12 23:49:35.328332 containerd[1530]: time="2025-09-12T23:49:35.328292815Z" level=info msg="connecting to shim 4d862fb0ab2d92d82d3bc1bbf25bec28212b8a8d26b265438b7f1dc79082756d" address="unix:///run/containerd/s/3521fc31c46ce26d6c661e988fec37f14aa963ab764656de1edc869ec87c4bdb" protocol=ttrpc version=3 Sep 12 23:49:35.355365 systemd[1]: Started cri-containerd-4d862fb0ab2d92d82d3bc1bbf25bec28212b8a8d26b265438b7f1dc79082756d.scope - 
libcontainer container 4d862fb0ab2d92d82d3bc1bbf25bec28212b8a8d26b265438b7f1dc79082756d. Sep 12 23:49:35.413297 containerd[1530]: time="2025-09-12T23:49:35.413220234Z" level=info msg="StartContainer for \"4d862fb0ab2d92d82d3bc1bbf25bec28212b8a8d26b265438b7f1dc79082756d\" returns successfully" Sep 12 23:49:36.161396 kubelet[2670]: I0912 23:49:36.161325 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-6bcbff775f-lbxp5" podStartSLOduration=1.7731104260000001 podStartE2EDuration="4.161216314s" podCreationTimestamp="2025-09-12 23:49:32 +0000 UTC" firstStartedPulling="2025-09-12 23:49:32.90854392 +0000 UTC m=+33.031858608" lastFinishedPulling="2025-09-12 23:49:35.296649808 +0000 UTC m=+35.419964496" observedRunningTime="2025-09-12 23:49:36.161077114 +0000 UTC m=+36.284391802" watchObservedRunningTime="2025-09-12 23:49:36.161216314 +0000 UTC m=+36.284531002" Sep 12 23:49:38.975216 containerd[1530]: time="2025-09-12T23:49:38.975172011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f67f4748b-597zd,Uid:e0ae7764-8dc3-457b-a3e1-0c3d7d7f329e,Namespace:calico-system,Attempt:0,}" Sep 12 23:49:38.975216 containerd[1530]: time="2025-09-12T23:49:38.975172051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-jbcmc,Uid:ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a,Namespace:calico-system,Attempt:0,}" Sep 12 23:49:38.976086 containerd[1530]: time="2025-09-12T23:49:38.975989651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rvcjx,Uid:444f5932-6f41-49ae-b1ad-2d15103b287b,Namespace:kube-system,Attempt:0,}" Sep 12 23:49:39.127356 systemd-networkd[1430]: cali6a2fe8c433a: Link UP Sep 12 23:49:39.128448 systemd-networkd[1430]: cali6a2fe8c433a: Gained carrier Sep 12 23:49:39.151980 containerd[1530]: 2025-09-12 23:49:39.022 [INFO][4191] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:49:39.151980 containerd[1530]: 2025-09-12 
23:49:39.041 [INFO][4191] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6f67f4748b--597zd-eth0 calico-kube-controllers-6f67f4748b- calico-system e0ae7764-8dc3-457b-a3e1-0c3d7d7f329e 836 0 2025-09-12 23:49:21 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f67f4748b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6f67f4748b-597zd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali6a2fe8c433a [] [] }} ContainerID="471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" Namespace="calico-system" Pod="calico-kube-controllers-6f67f4748b-597zd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f67f4748b--597zd-" Sep 12 23:49:39.151980 containerd[1530]: 2025-09-12 23:49:39.043 [INFO][4191] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" Namespace="calico-system" Pod="calico-kube-controllers-6f67f4748b-597zd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f67f4748b--597zd-eth0" Sep 12 23:49:39.151980 containerd[1530]: 2025-09-12 23:49:39.082 [INFO][4232] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" HandleID="k8s-pod-network.471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" Workload="localhost-k8s-calico--kube--controllers--6f67f4748b--597zd-eth0" Sep 12 23:49:39.152214 containerd[1530]: 2025-09-12 23:49:39.083 [INFO][4232] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" 
HandleID="k8s-pod-network.471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" Workload="localhost-k8s-calico--kube--controllers--6f67f4748b--597zd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121390), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6f67f4748b-597zd", "timestamp":"2025-09-12 23:49:39.08292871 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:39.152214 containerd[1530]: 2025-09-12 23:49:39.083 [INFO][4232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:39.152214 containerd[1530]: 2025-09-12 23:49:39.083 [INFO][4232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:49:39.152214 containerd[1530]: 2025-09-12 23:49:39.083 [INFO][4232] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:39.152214 containerd[1530]: 2025-09-12 23:49:39.094 [INFO][4232] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" host="localhost" Sep 12 23:49:39.152214 containerd[1530]: 2025-09-12 23:49:39.098 [INFO][4232] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:39.152214 containerd[1530]: 2025-09-12 23:49:39.106 [INFO][4232] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:39.152214 containerd[1530]: 2025-09-12 23:49:39.109 [INFO][4232] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:39.152214 containerd[1530]: 2025-09-12 23:49:39.111 [INFO][4232] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:39.152214 
containerd[1530]: 2025-09-12 23:49:39.112 [INFO][4232] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" host="localhost" Sep 12 23:49:39.152427 containerd[1530]: 2025-09-12 23:49:39.113 [INFO][4232] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c Sep 12 23:49:39.152427 containerd[1530]: 2025-09-12 23:49:39.117 [INFO][4232] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" host="localhost" Sep 12 23:49:39.152427 containerd[1530]: 2025-09-12 23:49:39.122 [INFO][4232] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" host="localhost" Sep 12 23:49:39.152427 containerd[1530]: 2025-09-12 23:49:39.122 [INFO][4232] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" host="localhost" Sep 12 23:49:39.152427 containerd[1530]: 2025-09-12 23:49:39.122 [INFO][4232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 23:49:39.152427 containerd[1530]: 2025-09-12 23:49:39.123 [INFO][4232] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" HandleID="k8s-pod-network.471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" Workload="localhost-k8s-calico--kube--controllers--6f67f4748b--597zd-eth0" Sep 12 23:49:39.152543 containerd[1530]: 2025-09-12 23:49:39.125 [INFO][4191] cni-plugin/k8s.go 418: Populated endpoint ContainerID="471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" Namespace="calico-system" Pod="calico-kube-controllers-6f67f4748b-597zd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f67f4748b--597zd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f67f4748b--597zd-eth0", GenerateName:"calico-kube-controllers-6f67f4748b-", Namespace:"calico-system", SelfLink:"", UID:"e0ae7764-8dc3-457b-a3e1-0c3d7d7f329e", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f67f4748b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6f67f4748b-597zd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a2fe8c433a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:39.152600 containerd[1530]: 2025-09-12 23:49:39.125 [INFO][4191] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" Namespace="calico-system" Pod="calico-kube-controllers-6f67f4748b-597zd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f67f4748b--597zd-eth0" Sep 12 23:49:39.152600 containerd[1530]: 2025-09-12 23:49:39.125 [INFO][4191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a2fe8c433a ContainerID="471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" Namespace="calico-system" Pod="calico-kube-controllers-6f67f4748b-597zd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f67f4748b--597zd-eth0" Sep 12 23:49:39.152600 containerd[1530]: 2025-09-12 23:49:39.129 [INFO][4191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" Namespace="calico-system" Pod="calico-kube-controllers-6f67f4748b-597zd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f67f4748b--597zd-eth0" Sep 12 23:49:39.152660 containerd[1530]: 2025-09-12 23:49:39.129 [INFO][4191] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" Namespace="calico-system" Pod="calico-kube-controllers-6f67f4748b-597zd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f67f4748b--597zd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6f67f4748b--597zd-eth0", GenerateName:"calico-kube-controllers-6f67f4748b-", Namespace:"calico-system", SelfLink:"", UID:"e0ae7764-8dc3-457b-a3e1-0c3d7d7f329e", ResourceVersion:"836", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f67f4748b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c", Pod:"calico-kube-controllers-6f67f4748b-597zd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali6a2fe8c433a", MAC:"62:52:95:ef:7b:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:39.152705 containerd[1530]: 2025-09-12 23:49:39.149 [INFO][4191] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" Namespace="calico-system" Pod="calico-kube-controllers-6f67f4748b-597zd" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6f67f4748b--597zd-eth0" Sep 12 23:49:39.282938 systemd-networkd[1430]: califa177afeb8a: Link UP Sep 12 23:49:39.285713 systemd-networkd[1430]: califa177afeb8a: 
Gained carrier Sep 12 23:49:39.291120 containerd[1530]: time="2025-09-12T23:49:39.291005625Z" level=info msg="connecting to shim 471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c" address="unix:///run/containerd/s/27a0fe745c41d2e61e0c18ec5965fded03e009dbe1d0a7b31827ef4b8514b93f" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:39.311080 containerd[1530]: 2025-09-12 23:49:39.036 [INFO][4216] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:49:39.311080 containerd[1530]: 2025-09-12 23:49:39.055 [INFO][4216] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--rvcjx-eth0 coredns-674b8bbfcf- kube-system 444f5932-6f41-49ae-b1ad-2d15103b287b 838 0 2025-09-12 23:49:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-rvcjx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califa177afeb8a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvcjx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rvcjx-" Sep 12 23:49:39.311080 containerd[1530]: 2025-09-12 23:49:39.055 [INFO][4216] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvcjx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rvcjx-eth0" Sep 12 23:49:39.311080 containerd[1530]: 2025-09-12 23:49:39.083 [INFO][4247] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" 
HandleID="k8s-pod-network.327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" Workload="localhost-k8s-coredns--674b8bbfcf--rvcjx-eth0" Sep 12 23:49:39.311598 containerd[1530]: 2025-09-12 23:49:39.083 [INFO][4247] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" HandleID="k8s-pod-network.327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" Workload="localhost-k8s-coredns--674b8bbfcf--rvcjx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400050ac40), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-rvcjx", "timestamp":"2025-09-12 23:49:39.08297927 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:39.311598 containerd[1530]: 2025-09-12 23:49:39.083 [INFO][4247] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:39.311598 containerd[1530]: 2025-09-12 23:49:39.123 [INFO][4247] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:49:39.311598 containerd[1530]: 2025-09-12 23:49:39.123 [INFO][4247] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:39.311598 containerd[1530]: 2025-09-12 23:49:39.206 [INFO][4247] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" host="localhost" Sep 12 23:49:39.311598 containerd[1530]: 2025-09-12 23:49:39.220 [INFO][4247] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:39.311598 containerd[1530]: 2025-09-12 23:49:39.234 [INFO][4247] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:39.311598 containerd[1530]: 2025-09-12 23:49:39.239 [INFO][4247] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:39.311598 containerd[1530]: 2025-09-12 23:49:39.246 [INFO][4247] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:39.311598 containerd[1530]: 2025-09-12 23:49:39.247 [INFO][4247] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" host="localhost" Sep 12 23:49:39.311973 containerd[1530]: 2025-09-12 23:49:39.253 [INFO][4247] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c Sep 12 23:49:39.311973 containerd[1530]: 2025-09-12 23:49:39.261 [INFO][4247] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" host="localhost" Sep 12 23:49:39.311973 containerd[1530]: 2025-09-12 23:49:39.269 [INFO][4247] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" host="localhost" Sep 12 23:49:39.311973 containerd[1530]: 2025-09-12 23:49:39.269 [INFO][4247] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" host="localhost" Sep 12 23:49:39.311973 containerd[1530]: 2025-09-12 23:49:39.269 [INFO][4247] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:49:39.311973 containerd[1530]: 2025-09-12 23:49:39.269 [INFO][4247] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" HandleID="k8s-pod-network.327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" Workload="localhost-k8s-coredns--674b8bbfcf--rvcjx-eth0" Sep 12 23:49:39.312173 containerd[1530]: 2025-09-12 23:49:39.280 [INFO][4216] cni-plugin/k8s.go 418: Populated endpoint ContainerID="327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvcjx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rvcjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rvcjx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"444f5932-6f41-49ae-b1ad-2d15103b287b", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-rvcjx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa177afeb8a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:39.312259 containerd[1530]: 2025-09-12 23:49:39.280 [INFO][4216] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvcjx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rvcjx-eth0" Sep 12 23:49:39.312259 containerd[1530]: 2025-09-12 23:49:39.280 [INFO][4216] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califa177afeb8a ContainerID="327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvcjx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rvcjx-eth0" Sep 12 23:49:39.312259 containerd[1530]: 2025-09-12 23:49:39.288 [INFO][4216] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvcjx" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rvcjx-eth0" Sep 12 23:49:39.312761 containerd[1530]: 2025-09-12 23:49:39.291 [INFO][4216] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvcjx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rvcjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--rvcjx-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"444f5932-6f41-49ae-b1ad-2d15103b287b", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c", Pod:"coredns-674b8bbfcf-rvcjx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califa177afeb8a", MAC:"0a:d9:51:68:26:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:39.312761 containerd[1530]: 2025-09-12 23:49:39.306 [INFO][4216] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvcjx" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--rvcjx-eth0" Sep 12 23:49:39.329705 systemd[1]: Started cri-containerd-471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c.scope - libcontainer container 471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c. Sep 12 23:49:39.355841 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:49:39.362643 containerd[1530]: time="2025-09-12T23:49:39.362291237Z" level=info msg="connecting to shim 327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c" address="unix:///run/containerd/s/95fb416f524ef35ebc6de9512b8a477528b14906beee061965a956939829234c" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:39.366261 systemd-networkd[1430]: cali88905fcfc1b: Link UP Sep 12 23:49:39.366468 systemd-networkd[1430]: cali88905fcfc1b: Gained carrier Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.030 [INFO][4204] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.051 [INFO][4204] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--jbcmc-eth0 goldmane-54d579b49d- calico-system ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a 840 0 2025-09-12 23:49:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane 
pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-jbcmc eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali88905fcfc1b [] [] }} ContainerID="7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" Namespace="calico-system" Pod="goldmane-54d579b49d-jbcmc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jbcmc-" Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.051 [INFO][4204] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" Namespace="calico-system" Pod="goldmane-54d579b49d-jbcmc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jbcmc-eth0" Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.086 [INFO][4241] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" HandleID="k8s-pod-network.7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" Workload="localhost-k8s-goldmane--54d579b49d--jbcmc-eth0" Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.086 [INFO][4241] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" HandleID="k8s-pod-network.7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" Workload="localhost-k8s-goldmane--54d579b49d--jbcmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000117540), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-jbcmc", "timestamp":"2025-09-12 23:49:39.08600455 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.086 [INFO][4241] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.269 [INFO][4241] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.269 [INFO][4241] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.295 [INFO][4241] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" host="localhost" Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.322 [INFO][4241] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.334 [INFO][4241] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.337 [INFO][4241] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.340 [INFO][4241] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.340 [INFO][4241] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" host="localhost" Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.343 [INFO][4241] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79 Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.349 [INFO][4241] ipam/ipam.go 1243: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" host="localhost" Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.358 [INFO][4241] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" host="localhost" Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.358 [INFO][4241] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" host="localhost" Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.358 [INFO][4241] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:49:39.385456 containerd[1530]: 2025-09-12 23:49:39.358 [INFO][4241] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" HandleID="k8s-pod-network.7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" Workload="localhost-k8s-goldmane--54d579b49d--jbcmc-eth0" Sep 12 23:49:39.386049 containerd[1530]: 2025-09-12 23:49:39.363 [INFO][4204] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" Namespace="calico-system" Pod="goldmane-54d579b49d-jbcmc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jbcmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--jbcmc-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 20, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-jbcmc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali88905fcfc1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:39.386049 containerd[1530]: 2025-09-12 23:49:39.363 [INFO][4204] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" Namespace="calico-system" Pod="goldmane-54d579b49d-jbcmc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jbcmc-eth0" Sep 12 23:49:39.386049 containerd[1530]: 2025-09-12 23:49:39.363 [INFO][4204] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali88905fcfc1b ContainerID="7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" Namespace="calico-system" Pod="goldmane-54d579b49d-jbcmc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jbcmc-eth0" Sep 12 23:49:39.386049 containerd[1530]: 2025-09-12 23:49:39.366 [INFO][4204] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" Namespace="calico-system" Pod="goldmane-54d579b49d-jbcmc" 
WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jbcmc-eth0" Sep 12 23:49:39.386049 containerd[1530]: 2025-09-12 23:49:39.368 [INFO][4204] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" Namespace="calico-system" Pod="goldmane-54d579b49d-jbcmc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jbcmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--jbcmc-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79", Pod:"goldmane-54d579b49d-jbcmc", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali88905fcfc1b", MAC:"c2:18:40:58:08:0d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:39.386049 containerd[1530]: 2025-09-12 23:49:39.381 [INFO][4204] 
cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" Namespace="calico-system" Pod="goldmane-54d579b49d-jbcmc" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--jbcmc-eth0" Sep 12 23:49:39.398563 systemd[1]: Started cri-containerd-327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c.scope - libcontainer container 327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c. Sep 12 23:49:39.405091 containerd[1530]: time="2025-09-12T23:49:39.405038364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f67f4748b-597zd,Uid:e0ae7764-8dc3-457b-a3e1-0c3d7d7f329e,Namespace:calico-system,Attempt:0,} returns sandbox id \"471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c\"" Sep 12 23:49:39.408203 containerd[1530]: time="2025-09-12T23:49:39.407091724Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 23:49:39.418051 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:49:39.427398 containerd[1530]: time="2025-09-12T23:49:39.427349408Z" level=info msg="connecting to shim 7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79" address="unix:///run/containerd/s/5a47f15e20324d8a0ae62e51b2e39bab30397101ee2a189f9d15aa634e80ab0e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:39.473373 systemd[1]: Started cri-containerd-7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79.scope - libcontainer container 7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79. 
Sep 12 23:49:39.481198 containerd[1530]: time="2025-09-12T23:49:39.481120617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rvcjx,Uid:444f5932-6f41-49ae-b1ad-2d15103b287b,Namespace:kube-system,Attempt:0,} returns sandbox id \"327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c\"" Sep 12 23:49:39.496559 containerd[1530]: time="2025-09-12T23:49:39.494471299Z" level=info msg="CreateContainer within sandbox \"327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:49:39.505627 containerd[1530]: time="2025-09-12T23:49:39.505518981Z" level=info msg="Container ef5b7656176a758e9e9e20dcc74c44693d3ffabddd1c1cfb03ba60716ff7c06e: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:39.505994 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:49:39.513751 containerd[1530]: time="2025-09-12T23:49:39.513703422Z" level=info msg="CreateContainer within sandbox \"327b2dda43674f3a14d1d7048e17a42da2c8a31c91a97d6feef927ad1227792c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ef5b7656176a758e9e9e20dcc74c44693d3ffabddd1c1cfb03ba60716ff7c06e\"" Sep 12 23:49:39.514977 containerd[1530]: time="2025-09-12T23:49:39.514947542Z" level=info msg="StartContainer for \"ef5b7656176a758e9e9e20dcc74c44693d3ffabddd1c1cfb03ba60716ff7c06e\"" Sep 12 23:49:39.516529 containerd[1530]: time="2025-09-12T23:49:39.516445223Z" level=info msg="connecting to shim ef5b7656176a758e9e9e20dcc74c44693d3ffabddd1c1cfb03ba60716ff7c06e" address="unix:///run/containerd/s/95fb416f524ef35ebc6de9512b8a477528b14906beee061965a956939829234c" protocol=ttrpc version=3 Sep 12 23:49:39.547736 containerd[1530]: time="2025-09-12T23:49:39.547595388Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-54d579b49d-jbcmc,Uid:ed6fc0a2-bf30-4b3f-a476-7f220b6cb40a,Namespace:calico-system,Attempt:0,} returns sandbox id \"7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79\"" Sep 12 23:49:39.551251 systemd[1]: Started cri-containerd-ef5b7656176a758e9e9e20dcc74c44693d3ffabddd1c1cfb03ba60716ff7c06e.scope - libcontainer container ef5b7656176a758e9e9e20dcc74c44693d3ffabddd1c1cfb03ba60716ff7c06e. Sep 12 23:49:39.579668 containerd[1530]: time="2025-09-12T23:49:39.579623833Z" level=info msg="StartContainer for \"ef5b7656176a758e9e9e20dcc74c44693d3ffabddd1c1cfb03ba60716ff7c06e\" returns successfully" Sep 12 23:49:40.147455 kubelet[2670]: I0912 23:49:40.147383 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rvcjx" podStartSLOduration=34.147330287 podStartE2EDuration="34.147330287s" podCreationTimestamp="2025-09-12 23:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:49:40.145803407 +0000 UTC m=+40.269118055" watchObservedRunningTime="2025-09-12 23:49:40.147330287 +0000 UTC m=+40.270644935" Sep 12 23:49:40.181325 systemd-networkd[1430]: cali6a2fe8c433a: Gained IPv6LL Sep 12 23:49:40.500342 systemd-networkd[1430]: cali88905fcfc1b: Gained IPv6LL Sep 12 23:49:40.564494 systemd-networkd[1430]: califa177afeb8a: Gained IPv6LL Sep 12 23:49:40.967179 containerd[1530]: time="2025-09-12T23:49:40.967057936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:40.967982 containerd[1530]: time="2025-09-12T23:49:40.967941696Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 12 23:49:40.968843 containerd[1530]: time="2025-09-12T23:49:40.968817936Z" level=info msg="ImageCreate event 
name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:40.971090 containerd[1530]: time="2025-09-12T23:49:40.971054337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:40.971983 containerd[1530]: time="2025-09-12T23:49:40.971935337Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 1.563732773s" Sep 12 23:49:40.971983 containerd[1530]: time="2025-09-12T23:49:40.971970497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 12 23:49:40.973118 containerd[1530]: time="2025-09-12T23:49:40.973099377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 23:49:40.974855 containerd[1530]: time="2025-09-12T23:49:40.974823857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4ljq7,Uid:fe0b7f3e-00e2-40e2-8217-c68c41916a8a,Namespace:kube-system,Attempt:0,}" Sep 12 23:49:40.981521 containerd[1530]: time="2025-09-12T23:49:40.981488218Z" level=info msg="CreateContainer within sandbox \"471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 23:49:40.997644 containerd[1530]: time="2025-09-12T23:49:40.997526021Z" level=info msg="Container d1c2c93620dfe357245935077ac280d4a4fcfd64463fe3a88ff7e6b364d589b1: 
CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:41.016712 containerd[1530]: time="2025-09-12T23:49:41.016619144Z" level=info msg="CreateContainer within sandbox \"471f91ec84799fe0e9c707464b243ce8bd502bd546c63f04e09d027a6409678c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d1c2c93620dfe357245935077ac280d4a4fcfd64463fe3a88ff7e6b364d589b1\"" Sep 12 23:49:41.017424 containerd[1530]: time="2025-09-12T23:49:41.017394024Z" level=info msg="StartContainer for \"d1c2c93620dfe357245935077ac280d4a4fcfd64463fe3a88ff7e6b364d589b1\"" Sep 12 23:49:41.019277 containerd[1530]: time="2025-09-12T23:49:41.019247424Z" level=info msg="connecting to shim d1c2c93620dfe357245935077ac280d4a4fcfd64463fe3a88ff7e6b364d589b1" address="unix:///run/containerd/s/27a0fe745c41d2e61e0c18ec5965fded03e009dbe1d0a7b31827ef4b8514b93f" protocol=ttrpc version=3 Sep 12 23:49:41.043352 systemd[1]: Started cri-containerd-d1c2c93620dfe357245935077ac280d4a4fcfd64463fe3a88ff7e6b364d589b1.scope - libcontainer container d1c2c93620dfe357245935077ac280d4a4fcfd64463fe3a88ff7e6b364d589b1. 
Sep 12 23:49:41.095189 containerd[1530]: time="2025-09-12T23:49:41.095095595Z" level=info msg="StartContainer for \"d1c2c93620dfe357245935077ac280d4a4fcfd64463fe3a88ff7e6b364d589b1\" returns successfully" Sep 12 23:49:41.107112 systemd-networkd[1430]: caliebbda9d8ce5: Link UP Sep 12 23:49:41.107817 systemd-networkd[1430]: caliebbda9d8ce5: Gained carrier Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.002 [INFO][4516] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.025 [INFO][4516] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--4ljq7-eth0 coredns-674b8bbfcf- kube-system fe0b7f3e-00e2-40e2-8217-c68c41916a8a 832 0 2025-09-12 23:49:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-4ljq7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliebbda9d8ce5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" Namespace="kube-system" Pod="coredns-674b8bbfcf-4ljq7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4ljq7-" Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.026 [INFO][4516] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" Namespace="kube-system" Pod="coredns-674b8bbfcf-4ljq7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4ljq7-eth0" Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.052 [INFO][4544] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" 
HandleID="k8s-pod-network.8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" Workload="localhost-k8s-coredns--674b8bbfcf--4ljq7-eth0" Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.052 [INFO][4544] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" HandleID="k8s-pod-network.8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" Workload="localhost-k8s-coredns--674b8bbfcf--4ljq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-4ljq7", "timestamp":"2025-09-12 23:49:41.052473029 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.052 [INFO][4544] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.052 [INFO][4544] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.052 [INFO][4544] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.064 [INFO][4544] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" host="localhost" Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.071 [INFO][4544] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.078 [INFO][4544] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.080 [INFO][4544] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.085 [INFO][4544] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.085 [INFO][4544] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" host="localhost" Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.087 [INFO][4544] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78 Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.093 [INFO][4544] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" host="localhost" Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.101 [INFO][4544] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" host="localhost" Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.102 [INFO][4544] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" host="localhost" Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.102 [INFO][4544] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:49:41.123336 containerd[1530]: 2025-09-12 23:49:41.102 [INFO][4544] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" HandleID="k8s-pod-network.8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" Workload="localhost-k8s-coredns--674b8bbfcf--4ljq7-eth0" Sep 12 23:49:41.123846 containerd[1530]: 2025-09-12 23:49:41.104 [INFO][4516] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" Namespace="kube-system" Pod="coredns-674b8bbfcf-4ljq7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4ljq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4ljq7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe0b7f3e-00e2-40e2-8217-c68c41916a8a", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-4ljq7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebbda9d8ce5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:41.123846 containerd[1530]: 2025-09-12 23:49:41.104 [INFO][4516] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" Namespace="kube-system" Pod="coredns-674b8bbfcf-4ljq7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4ljq7-eth0" Sep 12 23:49:41.123846 containerd[1530]: 2025-09-12 23:49:41.104 [INFO][4516] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliebbda9d8ce5 ContainerID="8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" Namespace="kube-system" Pod="coredns-674b8bbfcf-4ljq7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4ljq7-eth0" Sep 12 23:49:41.123846 containerd[1530]: 2025-09-12 23:49:41.107 [INFO][4516] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" Namespace="kube-system" Pod="coredns-674b8bbfcf-4ljq7" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4ljq7-eth0" Sep 12 23:49:41.123846 containerd[1530]: 2025-09-12 23:49:41.110 [INFO][4516] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" Namespace="kube-system" Pod="coredns-674b8bbfcf-4ljq7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4ljq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--4ljq7-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"fe0b7f3e-00e2-40e2-8217-c68c41916a8a", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78", Pod:"coredns-674b8bbfcf-4ljq7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliebbda9d8ce5", MAC:"1e:db:be:34:f1:f9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:41.123846 containerd[1530]: 2025-09-12 23:49:41.121 [INFO][4516] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" Namespace="kube-system" Pod="coredns-674b8bbfcf-4ljq7" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--4ljq7-eth0" Sep 12 23:49:41.139471 kubelet[2670]: I0912 23:49:41.139345 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6f67f4748b-597zd" podStartSLOduration=18.572870509 podStartE2EDuration="20.139324402s" podCreationTimestamp="2025-09-12 23:49:21 +0000 UTC" firstStartedPulling="2025-09-12 23:49:39.406475804 +0000 UTC m=+39.529790452" lastFinishedPulling="2025-09-12 23:49:40.972929657 +0000 UTC m=+41.096244345" observedRunningTime="2025-09-12 23:49:41.138305482 +0000 UTC m=+41.261620170" watchObservedRunningTime="2025-09-12 23:49:41.139324402 +0000 UTC m=+41.262639090" Sep 12 23:49:41.155945 containerd[1530]: time="2025-09-12T23:49:41.155898484Z" level=info msg="connecting to shim 8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78" address="unix:///run/containerd/s/e55163c413470e1dd22f6cfd606e5d638a8a9a28a39cf5734a7652e38f517108" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:41.189341 systemd[1]: Started cri-containerd-8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78.scope - libcontainer container 8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78. 
Sep 12 23:49:41.204163 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:49:41.289722 containerd[1530]: time="2025-09-12T23:49:41.289662504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-4ljq7,Uid:fe0b7f3e-00e2-40e2-8217-c68c41916a8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78\"" Sep 12 23:49:41.296480 containerd[1530]: time="2025-09-12T23:49:41.296444225Z" level=info msg="CreateContainer within sandbox \"8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 23:49:41.313364 containerd[1530]: time="2025-09-12T23:49:41.313319948Z" level=info msg="Container c170470049447b60059de6d797b8547b925b06bc92d78390957f2370dd7c5460: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:41.318898 containerd[1530]: time="2025-09-12T23:49:41.318862828Z" level=info msg="CreateContainer within sandbox \"8d1ed59fb256ff129316b857ffe3b6edb2d7ced4224d266a442f4110ac868c78\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c170470049447b60059de6d797b8547b925b06bc92d78390957f2370dd7c5460\"" Sep 12 23:49:41.320678 containerd[1530]: time="2025-09-12T23:49:41.320652269Z" level=info msg="StartContainer for \"c170470049447b60059de6d797b8547b925b06bc92d78390957f2370dd7c5460\"" Sep 12 23:49:41.321727 containerd[1530]: time="2025-09-12T23:49:41.321703909Z" level=info msg="connecting to shim c170470049447b60059de6d797b8547b925b06bc92d78390957f2370dd7c5460" address="unix:///run/containerd/s/e55163c413470e1dd22f6cfd606e5d638a8a9a28a39cf5734a7652e38f517108" protocol=ttrpc version=3 Sep 12 23:49:41.342410 systemd[1]: Started cri-containerd-c170470049447b60059de6d797b8547b925b06bc92d78390957f2370dd7c5460.scope - libcontainer container c170470049447b60059de6d797b8547b925b06bc92d78390957f2370dd7c5460. 
Sep 12 23:49:41.367575 containerd[1530]: time="2025-09-12T23:49:41.367540316Z" level=info msg="StartContainer for \"c170470049447b60059de6d797b8547b925b06bc92d78390957f2370dd7c5460\" returns successfully" Sep 12 23:49:41.980003 containerd[1530]: time="2025-09-12T23:49:41.979946446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6grrt,Uid:4b1d8eb9-d498-4006-84a4-c9d3a374aa3e,Namespace:calico-system,Attempt:0,}" Sep 12 23:49:42.261010 containerd[1530]: time="2025-09-12T23:49:42.260972285Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1c2c93620dfe357245935077ac280d4a4fcfd64463fe3a88ff7e6b364d589b1\" id:\"9d858c21d346b1e5d17197e4f7a6419121347cb436055cf301dae0be7abfc003\" pid:4739 exited_at:{seconds:1757720982 nanos:259306885}" Sep 12 23:49:42.263379 systemd-networkd[1430]: calid5de2fa5c43: Link UP Sep 12 23:49:42.263606 systemd-networkd[1430]: calid5de2fa5c43: Gained carrier Sep 12 23:49:42.287532 kubelet[2670]: I0912 23:49:42.287463 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-4ljq7" podStartSLOduration=36.287446369 podStartE2EDuration="36.287446369s" podCreationTimestamp="2025-09-12 23:49:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 23:49:42.143859029 +0000 UTC m=+42.267173677" watchObservedRunningTime="2025-09-12 23:49:42.287446369 +0000 UTC m=+42.410761057" Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.022 [INFO][4696] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.047 [INFO][4696] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--6grrt-eth0 csi-node-driver- calico-system 4b1d8eb9-d498-4006-84a4-c9d3a374aa3e 741 0 2025-09-12 23:49:20 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-6grrt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid5de2fa5c43 [] [] }} ContainerID="999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" Namespace="calico-system" Pod="csi-node-driver-6grrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--6grrt-" Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.047 [INFO][4696] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" Namespace="calico-system" Pod="csi-node-driver-6grrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--6grrt-eth0" Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.161 [INFO][4709] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" HandleID="k8s-pod-network.999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" Workload="localhost-k8s-csi--node--driver--6grrt-eth0" Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.162 [INFO][4709] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" HandleID="k8s-pod-network.999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" Workload="localhost-k8s-csi--node--driver--6grrt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035cfe0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-6grrt", "timestamp":"2025-09-12 23:49:42.161795911 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.162 [INFO][4709] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.162 [INFO][4709] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.162 [INFO][4709] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.180 [INFO][4709] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" host="localhost" Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.190 [INFO][4709] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.196 [INFO][4709] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.199 [INFO][4709] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.204 [INFO][4709] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.204 [INFO][4709] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" host="localhost" Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.210 [INFO][4709] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141 Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 
23:49:42.227 [INFO][4709] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" host="localhost" Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.248 [INFO][4709] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" host="localhost" Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.248 [INFO][4709] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" host="localhost" Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.248 [INFO][4709] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:49:42.289338 containerd[1530]: 2025-09-12 23:49:42.248 [INFO][4709] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" HandleID="k8s-pod-network.999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" Workload="localhost-k8s-csi--node--driver--6grrt-eth0" Sep 12 23:49:42.292250 containerd[1530]: 2025-09-12 23:49:42.255 [INFO][4696] cni-plugin/k8s.go 418: Populated endpoint ContainerID="999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" Namespace="calico-system" Pod="csi-node-driver-6grrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--6grrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6grrt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4b1d8eb9-d498-4006-84a4-c9d3a374aa3e", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 20, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-6grrt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid5de2fa5c43", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:42.292250 containerd[1530]: 2025-09-12 23:49:42.255 [INFO][4696] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" Namespace="calico-system" Pod="csi-node-driver-6grrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--6grrt-eth0" Sep 12 23:49:42.292250 containerd[1530]: 2025-09-12 23:49:42.256 [INFO][4696] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid5de2fa5c43 ContainerID="999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" Namespace="calico-system" Pod="csi-node-driver-6grrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--6grrt-eth0" Sep 12 23:49:42.292250 containerd[1530]: 2025-09-12 23:49:42.264 [INFO][4696] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" 
Namespace="calico-system" Pod="csi-node-driver-6grrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--6grrt-eth0" Sep 12 23:49:42.292250 containerd[1530]: 2025-09-12 23:49:42.266 [INFO][4696] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" Namespace="calico-system" Pod="csi-node-driver-6grrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--6grrt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--6grrt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4b1d8eb9-d498-4006-84a4-c9d3a374aa3e", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141", Pod:"csi-node-driver-6grrt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid5de2fa5c43", MAC:"5a:d6:85:cd:cf:54", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:42.292250 containerd[1530]: 2025-09-12 23:49:42.285 [INFO][4696] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" Namespace="calico-system" Pod="csi-node-driver-6grrt" WorkloadEndpoint="localhost-k8s-csi--node--driver--6grrt-eth0" Sep 12 23:49:42.548288 systemd-networkd[1430]: caliebbda9d8ce5: Gained IPv6LL Sep 12 23:49:42.577467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount256325402.mount: Deactivated successfully. Sep 12 23:49:42.642705 containerd[1530]: time="2025-09-12T23:49:42.642490018Z" level=info msg="connecting to shim 999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141" address="unix:///run/containerd/s/84900631272ae95b0038b4f0500a613256596617651f1f2e704f67c1f94cd15b" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:42.680397 systemd[1]: Started cri-containerd-999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141.scope - libcontainer container 999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141. 
Sep 12 23:49:42.740479 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:49:42.780240 containerd[1530]: time="2025-09-12T23:49:42.780120237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6grrt,Uid:4b1d8eb9-d498-4006-84a4-c9d3a374aa3e,Namespace:calico-system,Attempt:0,} returns sandbox id \"999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141\"" Sep 12 23:49:42.898193 kubelet[2670]: I0912 23:49:42.897768 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:49:42.975089 containerd[1530]: time="2025-09-12T23:49:42.975034424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c649579cc-bn98f,Uid:abe6524f-32ea-4fa9-86c5-c919366ca996,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:49:42.975486 containerd[1530]: time="2025-09-12T23:49:42.975374784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c649579cc-twspx,Uid:d22db48d-3cb6-4db1-8c4d-e3eb64653e01,Namespace:calico-apiserver,Attempt:0,}" Sep 12 23:49:43.093435 systemd[1]: Started sshd@7-10.0.0.101:22-10.0.0.1:41884.service - OpenSSH per-connection server daemon (10.0.0.1:41884). Sep 12 23:49:43.176207 sshd[4876]: Accepted publickey for core from 10.0.0.1 port 41884 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw Sep 12 23:49:43.177578 sshd-session[4876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 23:49:43.184988 systemd-logind[1508]: New session 8 of user core. Sep 12 23:49:43.195354 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 23:49:43.369721 systemd-networkd[1430]: cali0968460c639: Link UP Sep 12 23:49:43.370021 systemd-networkd[1430]: cali0968460c639: Gained carrier Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.003 [INFO][4831] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.024 [INFO][4831] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c649579cc--bn98f-eth0 calico-apiserver-6c649579cc- calico-apiserver abe6524f-32ea-4fa9-86c5-c919366ca996 837 0 2025-09-12 23:49:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c649579cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c649579cc-bn98f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0968460c639 [] [] }} ContainerID="dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-bn98f" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--bn98f-" Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.024 [INFO][4831] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-bn98f" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--bn98f-eth0" Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.077 [INFO][4860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" HandleID="k8s-pod-network.dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" 
Workload="localhost-k8s-calico--apiserver--6c649579cc--bn98f-eth0" Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.077 [INFO][4860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" HandleID="k8s-pod-network.dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" Workload="localhost-k8s-calico--apiserver--6c649579cc--bn98f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c620), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c649579cc-bn98f", "timestamp":"2025-09-12 23:49:43.077468758 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.077 [INFO][4860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.077 [INFO][4860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.077 [INFO][4860] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.092 [INFO][4860] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" host="localhost" Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.101 [INFO][4860] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.106 [INFO][4860] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.111 [INFO][4860] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.114 [INFO][4860] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.114 [INFO][4860] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" host="localhost" Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.116 [INFO][4860] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820 Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.349 [INFO][4860] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" host="localhost" Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.364 [INFO][4860] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 
handle="k8s-pod-network.dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" host="localhost" Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.364 [INFO][4860] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" host="localhost" Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.364 [INFO][4860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:49:43.398013 containerd[1530]: 2025-09-12 23:49:43.364 [INFO][4860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" HandleID="k8s-pod-network.dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" Workload="localhost-k8s-calico--apiserver--6c649579cc--bn98f-eth0" Sep 12 23:49:43.399971 containerd[1530]: 2025-09-12 23:49:43.366 [INFO][4831] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-bn98f" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--bn98f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c649579cc--bn98f-eth0", GenerateName:"calico-apiserver-6c649579cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"abe6524f-32ea-4fa9-86c5-c919366ca996", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c649579cc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c649579cc-bn98f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0968460c639", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:43.399971 containerd[1530]: 2025-09-12 23:49:43.366 [INFO][4831] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-bn98f" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--bn98f-eth0" Sep 12 23:49:43.399971 containerd[1530]: 2025-09-12 23:49:43.366 [INFO][4831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0968460c639 ContainerID="dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-bn98f" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--bn98f-eth0" Sep 12 23:49:43.399971 containerd[1530]: 2025-09-12 23:49:43.370 [INFO][4831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-bn98f" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--bn98f-eth0" Sep 12 23:49:43.399971 containerd[1530]: 2025-09-12 23:49:43.371 [INFO][4831] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-bn98f" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--bn98f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c649579cc--bn98f-eth0", GenerateName:"calico-apiserver-6c649579cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"abe6524f-32ea-4fa9-86c5-c919366ca996", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c649579cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820", Pod:"calico-apiserver-6c649579cc-bn98f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0968460c639", MAC:"1a:5e:19:1b:e1:bb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:43.399971 containerd[1530]: 2025-09-12 23:49:43.393 [INFO][4831] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-bn98f" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--bn98f-eth0" Sep 12 23:49:43.448906 systemd-networkd[1430]: cali98ead59b2c6: Link UP Sep 12 23:49:43.449083 systemd-networkd[1430]: cali98ead59b2c6: Gained carrier Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.018 [INFO][4838] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.038 [INFO][4838] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--6c649579cc--twspx-eth0 calico-apiserver-6c649579cc- calico-apiserver d22db48d-3cb6-4db1-8c4d-e3eb64653e01 841 0 2025-09-12 23:49:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c649579cc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-6c649579cc-twspx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali98ead59b2c6 [] [] }} ContainerID="755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-twspx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--twspx-" Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.039 [INFO][4838] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-twspx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--twspx-eth0" Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.096 [INFO][4866] ipam/ipam_plugin.go 225: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" HandleID="k8s-pod-network.755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" Workload="localhost-k8s-calico--apiserver--6c649579cc--twspx-eth0" Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.096 [INFO][4866] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" HandleID="k8s-pod-network.755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" Workload="localhost-k8s-calico--apiserver--6c649579cc--twspx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137dd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-6c649579cc-twspx", "timestamp":"2025-09-12 23:49:43.09648844 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.096 [INFO][4866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.364 [INFO][4866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.365 [INFO][4866] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.393 [INFO][4866] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" host="localhost" Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.399 [INFO][4866] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.406 [INFO][4866] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.408 [INFO][4866] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.411 [INFO][4866] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.412 [INFO][4866] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" host="localhost" Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.413 [INFO][4866] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6 Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.425 [INFO][4866] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" host="localhost" Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.443 [INFO][4866] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 
handle="k8s-pod-network.755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" host="localhost" Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.443 [INFO][4866] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" host="localhost" Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.443 [INFO][4866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 23:49:43.472552 containerd[1530]: 2025-09-12 23:49:43.443 [INFO][4866] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" HandleID="k8s-pod-network.755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" Workload="localhost-k8s-calico--apiserver--6c649579cc--twspx-eth0" Sep 12 23:49:43.473097 containerd[1530]: 2025-09-12 23:49:43.445 [INFO][4838] cni-plugin/k8s.go 418: Populated endpoint ContainerID="755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-twspx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--twspx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c649579cc--twspx-eth0", GenerateName:"calico-apiserver-6c649579cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d22db48d-3cb6-4db1-8c4d-e3eb64653e01", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c649579cc", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-6c649579cc-twspx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali98ead59b2c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:43.473097 containerd[1530]: 2025-09-12 23:49:43.445 [INFO][4838] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-twspx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--twspx-eth0" Sep 12 23:49:43.473097 containerd[1530]: 2025-09-12 23:49:43.445 [INFO][4838] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98ead59b2c6 ContainerID="755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-twspx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--twspx-eth0" Sep 12 23:49:43.473097 containerd[1530]: 2025-09-12 23:49:43.449 [INFO][4838] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-twspx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--twspx-eth0" Sep 12 23:49:43.473097 containerd[1530]: 2025-09-12 23:49:43.450 [INFO][4838] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-twspx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--twspx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--6c649579cc--twspx-eth0", GenerateName:"calico-apiserver-6c649579cc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d22db48d-3cb6-4db1-8c4d-e3eb64653e01", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 23, 49, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c649579cc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6", Pod:"calico-apiserver-6c649579cc-twspx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali98ead59b2c6", MAC:"de:cd:d0:3f:c2:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 23:49:43.473097 containerd[1530]: 2025-09-12 23:49:43.469 [INFO][4838] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" Namespace="calico-apiserver" Pod="calico-apiserver-6c649579cc-twspx" WorkloadEndpoint="localhost-k8s-calico--apiserver--6c649579cc--twspx-eth0" Sep 12 23:49:43.537735 containerd[1530]: time="2025-09-12T23:49:43.537620857Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:43.545296 containerd[1530]: time="2025-09-12T23:49:43.545252218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 12 23:49:43.546403 containerd[1530]: time="2025-09-12T23:49:43.546344258Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:43.554839 containerd[1530]: time="2025-09-12T23:49:43.554677820Z" level=info msg="connecting to shim dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820" address="unix:///run/containerd/s/6b4163813adcff1fc56716083b6757c2af666193b4795c4fcf991d9273bc8239" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:43.557997 containerd[1530]: time="2025-09-12T23:49:43.557750380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:43.560252 containerd[1530]: time="2025-09-12T23:49:43.560180860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.587052603s" Sep 12 23:49:43.560252 containerd[1530]: 
time="2025-09-12T23:49:43.560217500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 12 23:49:43.562869 containerd[1530]: time="2025-09-12T23:49:43.562671541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 23:49:43.568566 containerd[1530]: time="2025-09-12T23:49:43.568074581Z" level=info msg="connecting to shim 755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6" address="unix:///run/containerd/s/01ef2827b2dac41ceaf72f33bf30fcd662968c714ea0fee040fda38a8a9a4799" namespace=k8s.io protocol=ttrpc version=3 Sep 12 23:49:43.575364 containerd[1530]: time="2025-09-12T23:49:43.575310862Z" level=info msg="CreateContainer within sandbox \"7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 23:49:43.596580 containerd[1530]: time="2025-09-12T23:49:43.596512825Z" level=info msg="Container 617b9cb6e35dfbd28521fee73dbea99217154127c083d836c9a28b0c5ff8bf6e: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:43.615369 systemd[1]: Started cri-containerd-755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6.scope - libcontainer container 755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6. Sep 12 23:49:43.617485 systemd[1]: Started cri-containerd-dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820.scope - libcontainer container dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820. 
Sep 12 23:49:43.622998 containerd[1530]: time="2025-09-12T23:49:43.622084588Z" level=info msg="CreateContainer within sandbox \"7d084352dd3cdeb6d91b366aa2c451e78e70a957c5e59080d987345bf81d2c79\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"617b9cb6e35dfbd28521fee73dbea99217154127c083d836c9a28b0c5ff8bf6e\"" Sep 12 23:49:43.625404 containerd[1530]: time="2025-09-12T23:49:43.624979149Z" level=info msg="StartContainer for \"617b9cb6e35dfbd28521fee73dbea99217154127c083d836c9a28b0c5ff8bf6e\"" Sep 12 23:49:43.628724 containerd[1530]: time="2025-09-12T23:49:43.628689149Z" level=info msg="connecting to shim 617b9cb6e35dfbd28521fee73dbea99217154127c083d836c9a28b0c5ff8bf6e" address="unix:///run/containerd/s/5a47f15e20324d8a0ae62e51b2e39bab30397101ee2a189f9d15aa634e80ab0e" protocol=ttrpc version=3 Sep 12 23:49:43.652699 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:49:43.659236 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 23:49:43.681411 systemd[1]: Started cri-containerd-617b9cb6e35dfbd28521fee73dbea99217154127c083d836c9a28b0c5ff8bf6e.scope - libcontainer container 617b9cb6e35dfbd28521fee73dbea99217154127c083d836c9a28b0c5ff8bf6e. Sep 12 23:49:43.729693 containerd[1530]: time="2025-09-12T23:49:43.729586802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c649579cc-bn98f,Uid:abe6524f-32ea-4fa9-86c5-c919366ca996,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820\"" Sep 12 23:49:43.746743 sshd[4881]: Connection closed by 10.0.0.1 port 41884 Sep 12 23:49:43.748307 sshd-session[4876]: pam_unix(sshd:session): session closed for user core Sep 12 23:49:43.752627 systemd[1]: sshd@7-10.0.0.101:22-10.0.0.1:41884.service: Deactivated successfully. 
Sep 12 23:49:43.754302 containerd[1530]: time="2025-09-12T23:49:43.753542405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c649579cc-twspx,Uid:d22db48d-3cb6-4db1-8c4d-e3eb64653e01,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6\"" Sep 12 23:49:43.755771 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 23:49:43.759264 systemd-logind[1508]: Session 8 logged out. Waiting for processes to exit. Sep 12 23:49:43.760720 systemd-logind[1508]: Removed session 8. Sep 12 23:49:43.773388 containerd[1530]: time="2025-09-12T23:49:43.773336048Z" level=info msg="StartContainer for \"617b9cb6e35dfbd28521fee73dbea99217154127c083d836c9a28b0c5ff8bf6e\" returns successfully" Sep 12 23:49:43.986330 systemd-networkd[1430]: vxlan.calico: Link UP Sep 12 23:49:43.986336 systemd-networkd[1430]: vxlan.calico: Gained carrier Sep 12 23:49:44.084266 systemd-networkd[1430]: calid5de2fa5c43: Gained IPv6LL Sep 12 23:49:44.139089 kubelet[2670]: I0912 23:49:44.139055 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:49:44.156999 kubelet[2670]: I0912 23:49:44.156892 2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-jbcmc" podStartSLOduration=20.144401303 podStartE2EDuration="24.156874816s" podCreationTimestamp="2025-09-12 23:49:20 +0000 UTC" firstStartedPulling="2025-09-12 23:49:39.550019108 +0000 UTC m=+39.673333796" lastFinishedPulling="2025-09-12 23:49:43.562492621 +0000 UTC m=+43.685807309" observedRunningTime="2025-09-12 23:49:44.155881776 +0000 UTC m=+44.279196464" watchObservedRunningTime="2025-09-12 23:49:44.156874816 +0000 UTC m=+44.280189504" Sep 12 23:49:44.239172 containerd[1530]: time="2025-09-12T23:49:44.239004866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5d512e61689f4c118cca2cb6893f13882fe7df182ffa8b4178a8b6b3f2223cd\" 
id:\"3762ab318afd7cf7ce4615216c0dacd5f89c58e136cf910d5070eaa50e640a65\" pid:5152 exit_status:1 exited_at:{seconds:1757720984 nanos:238480866}" Sep 12 23:49:44.352529 containerd[1530]: time="2025-09-12T23:49:44.352492760Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5d512e61689f4c118cca2cb6893f13882fe7df182ffa8b4178a8b6b3f2223cd\" id:\"6eea26c21f45aac60560215883f7b5fd3b7895c4a45804eaa70a06339bae3e01\" pid:5205 exit_status:1 exited_at:{seconds:1757720984 nanos:352227960}" Sep 12 23:49:45.044317 systemd-networkd[1430]: cali0968460c639: Gained IPv6LL Sep 12 23:49:45.144685 kubelet[2670]: I0912 23:49:45.144633 2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 23:49:45.173470 systemd-networkd[1430]: cali98ead59b2c6: Gained IPv6LL Sep 12 23:49:45.298128 containerd[1530]: time="2025-09-12T23:49:45.298005153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:45.299489 containerd[1530]: time="2025-09-12T23:49:45.299456353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 12 23:49:45.301450 containerd[1530]: time="2025-09-12T23:49:45.301409473Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:45.303673 containerd[1530]: time="2025-09-12T23:49:45.303635074Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 23:49:45.304551 containerd[1530]: time="2025-09-12T23:49:45.304510514Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.741806173s" Sep 12 23:49:45.304551 containerd[1530]: time="2025-09-12T23:49:45.304546874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 12 23:49:45.305548 containerd[1530]: time="2025-09-12T23:49:45.305474354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 23:49:45.310174 containerd[1530]: time="2025-09-12T23:49:45.310094274Z" level=info msg="CreateContainer within sandbox \"999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 23:49:45.318541 containerd[1530]: time="2025-09-12T23:49:45.318493515Z" level=info msg="Container aa45e22ee4dba838232036fb88a6baf515462956529e9e23c1787ba1004efd3c: CDI devices from CRI Config.CDIDevices: []" Sep 12 23:49:45.329907 containerd[1530]: time="2025-09-12T23:49:45.329854197Z" level=info msg="CreateContainer within sandbox \"999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"aa45e22ee4dba838232036fb88a6baf515462956529e9e23c1787ba1004efd3c\"" Sep 12 23:49:45.330610 containerd[1530]: time="2025-09-12T23:49:45.330587117Z" level=info msg="StartContainer for \"aa45e22ee4dba838232036fb88a6baf515462956529e9e23c1787ba1004efd3c\"" Sep 12 23:49:45.331987 containerd[1530]: time="2025-09-12T23:49:45.331955797Z" level=info msg="connecting to shim aa45e22ee4dba838232036fb88a6baf515462956529e9e23c1787ba1004efd3c" address="unix:///run/containerd/s/84900631272ae95b0038b4f0500a613256596617651f1f2e704f67c1f94cd15b" protocol=ttrpc version=3 Sep 12 23:49:45.364394 systemd[1]: Started 
cri-containerd-aa45e22ee4dba838232036fb88a6baf515462956529e9e23c1787ba1004efd3c.scope - libcontainer container aa45e22ee4dba838232036fb88a6baf515462956529e9e23c1787ba1004efd3c.
Sep 12 23:49:45.402627 containerd[1530]: time="2025-09-12T23:49:45.402578365Z" level=info msg="StartContainer for \"aa45e22ee4dba838232036fb88a6baf515462956529e9e23c1787ba1004efd3c\" returns successfully"
Sep 12 23:49:45.556391 systemd-networkd[1430]: vxlan.calico: Gained IPv6LL
Sep 12 23:49:46.878073 containerd[1530]: time="2025-09-12T23:49:46.878010527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:46.878576 containerd[1530]: time="2025-09-12T23:49:46.878535727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807"
Sep 12 23:49:46.879611 containerd[1530]: time="2025-09-12T23:49:46.879586087Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:46.881719 containerd[1530]: time="2025-09-12T23:49:46.881670007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:46.882319 containerd[1530]: time="2025-09-12T23:49:46.882283687Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.576761013s"
Sep 12 23:49:46.882319 containerd[1530]: time="2025-09-12T23:49:46.882319127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 12 23:49:46.883335 containerd[1530]: time="2025-09-12T23:49:46.883311448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 12 23:49:46.886153 containerd[1530]: time="2025-09-12T23:49:46.886081448Z" level=info msg="CreateContainer within sandbox \"dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 12 23:49:46.892630 containerd[1530]: time="2025-09-12T23:49:46.891921488Z" level=info msg="Container 3826ccb27582ec2866669440220b67fef4fc352c0c1a29771fc46aef51e3d9b5: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:49:46.900392 containerd[1530]: time="2025-09-12T23:49:46.900343129Z" level=info msg="CreateContainer within sandbox \"dad81664bba51c4e9758d912ce2ee3582a2cfce8cd5d0fc78a2ec7e0e7671820\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3826ccb27582ec2866669440220b67fef4fc352c0c1a29771fc46aef51e3d9b5\""
Sep 12 23:49:46.901007 containerd[1530]: time="2025-09-12T23:49:46.900818609Z" level=info msg="StartContainer for \"3826ccb27582ec2866669440220b67fef4fc352c0c1a29771fc46aef51e3d9b5\""
Sep 12 23:49:46.902223 containerd[1530]: time="2025-09-12T23:49:46.902167370Z" level=info msg="connecting to shim 3826ccb27582ec2866669440220b67fef4fc352c0c1a29771fc46aef51e3d9b5" address="unix:///run/containerd/s/6b4163813adcff1fc56716083b6757c2af666193b4795c4fcf991d9273bc8239" protocol=ttrpc version=3
Sep 12 23:49:46.929375 systemd[1]: Started cri-containerd-3826ccb27582ec2866669440220b67fef4fc352c0c1a29771fc46aef51e3d9b5.scope - libcontainer container 3826ccb27582ec2866669440220b67fef4fc352c0c1a29771fc46aef51e3d9b5.
Sep 12 23:49:47.022396 containerd[1530]: time="2025-09-12T23:49:47.022342542Z" level=info msg="StartContainer for \"3826ccb27582ec2866669440220b67fef4fc352c0c1a29771fc46aef51e3d9b5\" returns successfully"
Sep 12 23:49:47.114276 containerd[1530]: time="2025-09-12T23:49:47.114229912Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:47.115392 containerd[1530]: time="2025-09-12T23:49:47.115363352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 12 23:49:47.116704 containerd[1530]: time="2025-09-12T23:49:47.116656952Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 233.314264ms"
Sep 12 23:49:47.116744 containerd[1530]: time="2025-09-12T23:49:47.116715152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\""
Sep 12 23:49:47.119520 containerd[1530]: time="2025-09-12T23:49:47.119491512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 23:49:47.123579 containerd[1530]: time="2025-09-12T23:49:47.123546192Z" level=info msg="CreateContainer within sandbox \"755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 12 23:49:47.132097 containerd[1530]: time="2025-09-12T23:49:47.131975153Z" level=info msg="Container 88686a5cb83568ba1c4a6afb48ef24c4f11cfcce74cfd05c689353a379ca79b1: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:49:47.144426 containerd[1530]: time="2025-09-12T23:49:47.144381315Z" level=info msg="CreateContainer within sandbox \"755f28966d825ce77f8078b63443ee875e3caca6b409bdf5f2d83cca9209cec6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"88686a5cb83568ba1c4a6afb48ef24c4f11cfcce74cfd05c689353a379ca79b1\""
Sep 12 23:49:47.145095 containerd[1530]: time="2025-09-12T23:49:47.145068115Z" level=info msg="StartContainer for \"88686a5cb83568ba1c4a6afb48ef24c4f11cfcce74cfd05c689353a379ca79b1\""
Sep 12 23:49:47.146142 containerd[1530]: time="2025-09-12T23:49:47.146109795Z" level=info msg="connecting to shim 88686a5cb83568ba1c4a6afb48ef24c4f11cfcce74cfd05c689353a379ca79b1" address="unix:///run/containerd/s/01ef2827b2dac41ceaf72f33bf30fcd662968c714ea0fee040fda38a8a9a4799" protocol=ttrpc version=3
Sep 12 23:49:47.176410 systemd[1]: Started cri-containerd-88686a5cb83568ba1c4a6afb48ef24c4f11cfcce74cfd05c689353a379ca79b1.scope - libcontainer container 88686a5cb83568ba1c4a6afb48ef24c4f11cfcce74cfd05c689353a379ca79b1.
Sep 12 23:49:47.184627 kubelet[2670]: I0912 23:49:47.184564    2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c649579cc-bn98f" podStartSLOduration=27.033192313 podStartE2EDuration="30.184550399s" podCreationTimestamp="2025-09-12 23:49:17 +0000 UTC" firstStartedPulling="2025-09-12 23:49:43.731597482 +0000 UTC m=+43.854912130" lastFinishedPulling="2025-09-12 23:49:46.882955528 +0000 UTC m=+47.006270216" observedRunningTime="2025-09-12 23:49:47.181894078 +0000 UTC m=+47.305208766" watchObservedRunningTime="2025-09-12 23:49:47.184550399 +0000 UTC m=+47.307865087"
Sep 12 23:49:47.324050 containerd[1530]: time="2025-09-12T23:49:47.323946133Z" level=info msg="StartContainer for \"88686a5cb83568ba1c4a6afb48ef24c4f11cfcce74cfd05c689353a379ca79b1\" returns successfully"
Sep 12 23:49:48.165332 kubelet[2670]: I0912 23:49:48.165293    2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 23:49:48.185664 containerd[1530]: time="2025-09-12T23:49:48.185615338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:48.186867 containerd[1530]: time="2025-09-12T23:49:48.186820778Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208"
Sep 12 23:49:48.188433 containerd[1530]: time="2025-09-12T23:49:48.188392738Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:48.192894 containerd[1530]: time="2025-09-12T23:49:48.192813778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 23:49:48.195170 containerd[1530]: time="2025-09-12T23:49:48.195122659Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.075506547s"
Sep 12 23:49:48.195378 containerd[1530]: time="2025-09-12T23:49:48.195172299Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\""
Sep 12 23:49:48.201131 containerd[1530]: time="2025-09-12T23:49:48.200876579Z" level=info msg="CreateContainer within sandbox \"999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 23:49:48.211500 containerd[1530]: time="2025-09-12T23:49:48.211459020Z" level=info msg="Container 7305fd6912d74947432f709575e4e24cacf71ce2e46ece8c5a6af2610371cda1: CDI devices from CRI Config.CDIDevices: []"
Sep 12 23:49:48.234457 containerd[1530]: time="2025-09-12T23:49:48.234394022Z" level=info msg="CreateContainer within sandbox \"999b68e3072c6470abc9ddcac87af71e986aef27d2c8313627a1f3640b5ef141\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7305fd6912d74947432f709575e4e24cacf71ce2e46ece8c5a6af2610371cda1\""
Sep 12 23:49:48.235239 containerd[1530]: time="2025-09-12T23:49:48.235194062Z" level=info msg="StartContainer for \"7305fd6912d74947432f709575e4e24cacf71ce2e46ece8c5a6af2610371cda1\""
Sep 12 23:49:48.237239 containerd[1530]: time="2025-09-12T23:49:48.237177823Z" level=info msg="connecting to shim 7305fd6912d74947432f709575e4e24cacf71ce2e46ece8c5a6af2610371cda1" address="unix:///run/containerd/s/84900631272ae95b0038b4f0500a613256596617651f1f2e704f67c1f94cd15b" protocol=ttrpc version=3
Sep 12 23:49:48.272362 systemd[1]: Started cri-containerd-7305fd6912d74947432f709575e4e24cacf71ce2e46ece8c5a6af2610371cda1.scope - libcontainer container 7305fd6912d74947432f709575e4e24cacf71ce2e46ece8c5a6af2610371cda1.
Sep 12 23:49:48.345237 containerd[1530]: time="2025-09-12T23:49:48.345181473Z" level=info msg="StartContainer for \"7305fd6912d74947432f709575e4e24cacf71ce2e46ece8c5a6af2610371cda1\" returns successfully"
Sep 12 23:49:48.764292 systemd[1]: Started sshd@8-10.0.0.101:22-10.0.0.1:41900.service - OpenSSH per-connection server daemon (10.0.0.1:41900).
Sep 12 23:49:48.859419 sshd[5387]: Accepted publickey for core from 10.0.0.1 port 41900 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:49:48.862393 sshd-session[5387]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:49:48.871301 systemd-logind[1508]: New session 9 of user core.
Sep 12 23:49:48.883476 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 23:49:49.062463 kubelet[2670]: I0912 23:49:49.062224    2670 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 23:49:49.068245 kubelet[2670]: I0912 23:49:49.068041    2670 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 23:49:49.173623 kubelet[2670]: I0912 23:49:49.173092    2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 23:49:49.200042 kubelet[2670]: I0912 23:49:49.199969    2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6grrt" podStartSLOduration=23.78602925 podStartE2EDuration="29.199948592s" podCreationTimestamp="2025-09-12 23:49:20 +0000 UTC" firstStartedPulling="2025-09-12 23:49:42.782559717 +0000 UTC m=+42.905874405" lastFinishedPulling="2025-09-12 23:49:48.196479099 +0000 UTC m=+48.319793747" observedRunningTime="2025-09-12 23:49:49.197987032 +0000 UTC m=+49.321301760" watchObservedRunningTime="2025-09-12 23:49:49.199948592 +0000 UTC m=+49.323263280"
Sep 12 23:49:49.200486 kubelet[2670]: I0912 23:49:49.200451    2670 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c649579cc-twspx" podStartSLOduration=28.838731526 podStartE2EDuration="32.200442232s" podCreationTimestamp="2025-09-12 23:49:17 +0000 UTC" firstStartedPulling="2025-09-12 23:49:43.755677686 +0000 UTC m=+43.878992374" lastFinishedPulling="2025-09-12 23:49:47.117388392 +0000 UTC m=+47.240703080" observedRunningTime="2025-09-12 23:49:48.180754937 +0000 UTC m=+48.304069625" watchObservedRunningTime="2025-09-12 23:49:49.200442232 +0000 UTC m=+49.323756920"
Sep 12 23:49:49.208362 sshd[5389]: Connection closed by 10.0.0.1 port 41900
Sep 12 23:49:49.209267 sshd-session[5387]: pam_unix(sshd:session): session closed for user core
Sep 12 23:49:49.214344 systemd[1]: sshd@8-10.0.0.101:22-10.0.0.1:41900.service: Deactivated successfully.
Sep 12 23:49:49.217154 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 23:49:49.218010 systemd-logind[1508]: Session 9 logged out. Waiting for processes to exit.
Sep 12 23:49:49.219547 systemd-logind[1508]: Removed session 9.
Sep 12 23:49:52.951017 kubelet[2670]: I0912 23:49:52.950947    2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 23:49:53.061849 containerd[1530]: time="2025-09-12T23:49:53.061794859Z" level=info msg="TaskExit event in podsandbox handler container_id:\"617b9cb6e35dfbd28521fee73dbea99217154127c083d836c9a28b0c5ff8bf6e\" id:\"257f27a475cbaf2777f070e55eea6853f019a5576a8b00e64c874b1e0efe4271\" pid:5427 exited_at:{seconds:1757720993 nanos:56382619}"
Sep 12 23:49:53.144469 containerd[1530]: time="2025-09-12T23:49:53.144425745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"617b9cb6e35dfbd28521fee73dbea99217154127c083d836c9a28b0c5ff8bf6e\" id:\"ec84801d6f470f8f6fbede919a52a43ce8534648b540ee88fd3ff8cdf71f3a89\" pid:5451 exited_at:{seconds:1757720993 nanos:144039185}"
Sep 12 23:49:54.228258 systemd[1]: Started sshd@9-10.0.0.101:22-10.0.0.1:42484.service - OpenSSH per-connection server daemon (10.0.0.1:42484).
Sep 12 23:49:54.302653 sshd[5466]: Accepted publickey for core from 10.0.0.1 port 42484 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:49:54.311272 sshd-session[5466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:49:54.317500 systemd-logind[1508]: New session 10 of user core.
Sep 12 23:49:54.326369 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 23:49:54.495736 sshd[5474]: Connection closed by 10.0.0.1 port 42484
Sep 12 23:49:54.497078 sshd-session[5466]: pam_unix(sshd:session): session closed for user core
Sep 12 23:49:54.511905 systemd[1]: sshd@9-10.0.0.101:22-10.0.0.1:42484.service: Deactivated successfully.
Sep 12 23:49:54.514740 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 23:49:54.515576 systemd-logind[1508]: Session 10 logged out. Waiting for processes to exit.
Sep 12 23:49:54.519817 systemd[1]: Started sshd@10-10.0.0.101:22-10.0.0.1:42492.service - OpenSSH per-connection server daemon (10.0.0.1:42492).
Sep 12 23:49:54.520353 systemd-logind[1508]: Removed session 10.
Sep 12 23:49:54.598726 sshd[5490]: Accepted publickey for core from 10.0.0.1 port 42492 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:49:54.600211 sshd-session[5490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:49:54.604994 systemd-logind[1508]: New session 11 of user core.
Sep 12 23:49:54.617447 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 23:49:54.862641 sshd[5492]: Connection closed by 10.0.0.1 port 42492
Sep 12 23:49:54.863322 sshd-session[5490]: pam_unix(sshd:session): session closed for user core
Sep 12 23:49:54.874521 systemd[1]: sshd@10-10.0.0.101:22-10.0.0.1:42492.service: Deactivated successfully.
Sep 12 23:49:54.877623 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 23:49:54.879558 systemd-logind[1508]: Session 11 logged out. Waiting for processes to exit.
Sep 12 23:49:54.885931 systemd[1]: Started sshd@11-10.0.0.101:22-10.0.0.1:42502.service - OpenSSH per-connection server daemon (10.0.0.1:42502).
Sep 12 23:49:54.889213 systemd-logind[1508]: Removed session 11.
Sep 12 23:49:54.951301 sshd[5505]: Accepted publickey for core from 10.0.0.1 port 42502 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:49:54.952768 sshd-session[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:49:54.957256 systemd-logind[1508]: New session 12 of user core.
Sep 12 23:49:54.972389 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 23:49:55.130585 sshd[5507]: Connection closed by 10.0.0.1 port 42502
Sep 12 23:49:55.130937 sshd-session[5505]: pam_unix(sshd:session): session closed for user core
Sep 12 23:49:55.135058 systemd[1]: sshd@11-10.0.0.101:22-10.0.0.1:42502.service: Deactivated successfully.
Sep 12 23:49:55.136931 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 23:49:55.139048 systemd-logind[1508]: Session 12 logged out. Waiting for processes to exit.
Sep 12 23:49:55.140762 systemd-logind[1508]: Removed session 12.
Sep 12 23:50:00.148007 systemd[1]: Started sshd@12-10.0.0.101:22-10.0.0.1:55526.service - OpenSSH per-connection server daemon (10.0.0.1:55526).
Sep 12 23:50:00.235436 sshd[5528]: Accepted publickey for core from 10.0.0.1 port 55526 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:00.236802 sshd-session[5528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:00.242655 systemd-logind[1508]: New session 13 of user core.
Sep 12 23:50:00.266380 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 23:50:00.422574 sshd[5530]: Connection closed by 10.0.0.1 port 55526
Sep 12 23:50:00.422363 sshd-session[5528]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:00.431668 systemd[1]: sshd@12-10.0.0.101:22-10.0.0.1:55526.service: Deactivated successfully.
Sep 12 23:50:00.433739 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 23:50:00.435901 systemd-logind[1508]: Session 13 logged out. Waiting for processes to exit.
Sep 12 23:50:00.438914 systemd[1]: Started sshd@13-10.0.0.101:22-10.0.0.1:55530.service - OpenSSH per-connection server daemon (10.0.0.1:55530).
Sep 12 23:50:00.440869 systemd-logind[1508]: Removed session 13.
Sep 12 23:50:00.509447 sshd[5543]: Accepted publickey for core from 10.0.0.1 port 55530 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:00.510905 sshd-session[5543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:00.515599 systemd-logind[1508]: New session 14 of user core.
Sep 12 23:50:00.532334 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 23:50:00.755662 sshd[5545]: Connection closed by 10.0.0.1 port 55530
Sep 12 23:50:00.756436 sshd-session[5543]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:00.769725 systemd[1]: sshd@13-10.0.0.101:22-10.0.0.1:55530.service: Deactivated successfully.
Sep 12 23:50:00.772150 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 23:50:00.773161 systemd-logind[1508]: Session 14 logged out. Waiting for processes to exit.
Sep 12 23:50:00.776763 systemd[1]: Started sshd@14-10.0.0.101:22-10.0.0.1:55544.service - OpenSSH per-connection server daemon (10.0.0.1:55544).
Sep 12 23:50:00.778034 systemd-logind[1508]: Removed session 14.
Sep 12 23:50:00.833319 sshd[5556]: Accepted publickey for core from 10.0.0.1 port 55544 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:00.835945 sshd-session[5556]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:00.841876 systemd-logind[1508]: New session 15 of user core.
Sep 12 23:50:00.856350 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 23:50:01.178375 kubelet[2670]: I0912 23:50:01.176525    2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 23:50:01.486987 sshd[5558]: Connection closed by 10.0.0.1 port 55544
Sep 12 23:50:01.487329 sshd-session[5556]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:01.497414 systemd[1]: sshd@14-10.0.0.101:22-10.0.0.1:55544.service: Deactivated successfully.
Sep 12 23:50:01.502485 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 23:50:01.504849 systemd-logind[1508]: Session 15 logged out. Waiting for processes to exit.
Sep 12 23:50:01.512482 systemd[1]: Started sshd@15-10.0.0.101:22-10.0.0.1:55546.service - OpenSSH per-connection server daemon (10.0.0.1:55546).
Sep 12 23:50:01.514811 systemd-logind[1508]: Removed session 15.
Sep 12 23:50:01.566254 sshd[5578]: Accepted publickey for core from 10.0.0.1 port 55546 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:01.567754 sshd-session[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:01.571805 systemd-logind[1508]: New session 16 of user core.
Sep 12 23:50:01.590341 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 23:50:01.872030 sshd[5580]: Connection closed by 10.0.0.1 port 55546
Sep 12 23:50:01.871598 sshd-session[5578]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:01.881365 systemd[1]: sshd@15-10.0.0.101:22-10.0.0.1:55546.service: Deactivated successfully.
Sep 12 23:50:01.884524 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 23:50:01.886729 systemd-logind[1508]: Session 16 logged out. Waiting for processes to exit.
Sep 12 23:50:01.891581 systemd[1]: Started sshd@16-10.0.0.101:22-10.0.0.1:55554.service - OpenSSH per-connection server daemon (10.0.0.1:55554).
Sep 12 23:50:01.894763 systemd-logind[1508]: Removed session 16.
Sep 12 23:50:01.953996 sshd[5592]: Accepted publickey for core from 10.0.0.1 port 55554 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:01.955522 sshd-session[5592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:01.961686 systemd-logind[1508]: New session 17 of user core.
Sep 12 23:50:01.966368 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 23:50:02.010057 containerd[1530]: time="2025-09-12T23:50:02.010010175Z" level=info msg="TaskExit event in podsandbox handler container_id:\"617b9cb6e35dfbd28521fee73dbea99217154127c083d836c9a28b0c5ff8bf6e\" id:\"a82b820ffbe97dbc4ecc7ee7e76a0780d13dd857c3e8a1666a3673d6144dce90\" pid:5607 exited_at:{seconds:1757721002 nanos:9760695}"
Sep 12 23:50:02.109201 sshd[5617]: Connection closed by 10.0.0.1 port 55554
Sep 12 23:50:02.109527 sshd-session[5592]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:02.113979 systemd[1]: sshd@16-10.0.0.101:22-10.0.0.1:55554.service: Deactivated successfully.
Sep 12 23:50:02.116192 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 23:50:02.117003 systemd-logind[1508]: Session 17 logged out. Waiting for processes to exit.
Sep 12 23:50:02.118189 systemd-logind[1508]: Removed session 17.
Sep 12 23:50:07.128515 systemd[1]: Started sshd@17-10.0.0.101:22-10.0.0.1:55562.service - OpenSSH per-connection server daemon (10.0.0.1:55562).
Sep 12 23:50:07.212277 sshd[5643]: Accepted publickey for core from 10.0.0.1 port 55562 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:07.214773 sshd-session[5643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:07.223341 systemd-logind[1508]: New session 18 of user core.
Sep 12 23:50:07.229385 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 23:50:07.374869 sshd[5645]: Connection closed by 10.0.0.1 port 55562
Sep 12 23:50:07.375238 sshd-session[5643]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:07.379354 systemd[1]: sshd@17-10.0.0.101:22-10.0.0.1:55562.service: Deactivated successfully.
Sep 12 23:50:07.381154 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 23:50:07.384356 systemd-logind[1508]: Session 18 logged out. Waiting for processes to exit.
Sep 12 23:50:07.385461 systemd-logind[1508]: Removed session 18.
Sep 12 23:50:11.373923 kubelet[2670]: I0912 23:50:11.373877    2670 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 23:50:12.170703 containerd[1530]: time="2025-09-12T23:50:12.170656698Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1c2c93620dfe357245935077ac280d4a4fcfd64463fe3a88ff7e6b364d589b1\" id:\"97310a8f7a14ab981be48cc5cf5273ef9f328870a594cbc529599b11e51d1978\" pid:5676 exited_at:{seconds:1757721012 nanos:170337336}"
Sep 12 23:50:12.387664 systemd[1]: Started sshd@18-10.0.0.101:22-10.0.0.1:37580.service - OpenSSH per-connection server daemon (10.0.0.1:37580).
Sep 12 23:50:12.440856 sshd[5687]: Accepted publickey for core from 10.0.0.1 port 37580 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:12.442609 sshd-session[5687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:12.447087 systemd-logind[1508]: New session 19 of user core.
Sep 12 23:50:12.459351 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 23:50:12.592400 sshd[5689]: Connection closed by 10.0.0.1 port 37580
Sep 12 23:50:12.592722 sshd-session[5687]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:12.596412 systemd[1]: sshd@18-10.0.0.101:22-10.0.0.1:37580.service: Deactivated successfully.
Sep 12 23:50:12.598124 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 23:50:12.600905 systemd-logind[1508]: Session 19 logged out. Waiting for processes to exit.
Sep 12 23:50:12.602626 systemd-logind[1508]: Removed session 19.
Sep 12 23:50:14.367267 containerd[1530]: time="2025-09-12T23:50:14.367199073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a5d512e61689f4c118cca2cb6893f13882fe7df182ffa8b4178a8b6b3f2223cd\" id:\"8f1f2f87cdad6e284addf1584045754fae88bcecc1ecc68972b8c36567917122\" pid:5716 exited_at:{seconds:1757721014 nanos:366878792}"
Sep 12 23:50:17.611938 systemd[1]: Started sshd@19-10.0.0.101:22-10.0.0.1:37588.service - OpenSSH per-connection server daemon (10.0.0.1:37588).
Sep 12 23:50:17.686134 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 37588 ssh2: RSA SHA256:U495jLcrOdK3hoPgih3/zUS8L+hgQo+VhebSoZqpcKw
Sep 12 23:50:17.687951 sshd-session[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 23:50:17.693221 systemd-logind[1508]: New session 20 of user core.
Sep 12 23:50:17.701717 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 23:50:17.907080 sshd[5731]: Connection closed by 10.0.0.1 port 37588
Sep 12 23:50:17.907814 sshd-session[5729]: pam_unix(sshd:session): session closed for user core
Sep 12 23:50:17.912024 systemd-logind[1508]: Session 20 logged out. Waiting for processes to exit.
Sep 12 23:50:17.912313 systemd[1]: sshd@19-10.0.0.101:22-10.0.0.1:37588.service: Deactivated successfully.
Sep 12 23:50:17.914120 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 23:50:17.916069 systemd-logind[1508]: Removed session 20.