Sep 10 23:22:06.792264 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 10 23:22:06.792284 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Wed Sep 10 22:08:24 -00 2025
Sep 10 23:22:06.792294 kernel: KASLR enabled
Sep 10 23:22:06.792299 kernel: efi: EFI v2.7 by EDK II
Sep 10 23:22:06.792305 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 10 23:22:06.792310 kernel: random: crng init done
Sep 10 23:22:06.792317 kernel: secureboot: Secure boot disabled
Sep 10 23:22:06.792322 kernel: ACPI: Early table checksum verification disabled
Sep 10 23:22:06.792330 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 10 23:22:06.792338 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 10 23:22:06.792345 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:22:06.792352 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:22:06.792357 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:22:06.792363 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:22:06.792370 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:22:06.792377 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:22:06.792383 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:22:06.792389 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:22:06.792395 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 10 23:22:06.792401 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 10 23:22:06.792407 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 10 23:22:06.792413 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:22:06.792419 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 10 23:22:06.792425 kernel: Zone ranges:
Sep 10 23:22:06.792432 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:22:06.792439 kernel: DMA32 empty
Sep 10 23:22:06.792445 kernel: Normal empty
Sep 10 23:22:06.792451 kernel: Device empty
Sep 10 23:22:06.792457 kernel: Movable zone start for each node
Sep 10 23:22:06.792462 kernel: Early memory node ranges
Sep 10 23:22:06.792468 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 10 23:22:06.792474 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 10 23:22:06.792480 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 10 23:22:06.792486 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 10 23:22:06.792492 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 10 23:22:06.792498 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 10 23:22:06.792504 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 10 23:22:06.792512 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 10 23:22:06.792518 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 10 23:22:06.792524 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 10 23:22:06.792533 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 10 23:22:06.792539 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 10 23:22:06.792545 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 10 23:22:06.792553 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 10 23:22:06.792560 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 10 23:22:06.792566 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 10 23:22:06.792573 kernel: psci: probing for conduit method from ACPI.
Sep 10 23:22:06.792579 kernel: psci: PSCIv1.1 detected in firmware.
Sep 10 23:22:06.792586 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 10 23:22:06.792593 kernel: psci: Trusted OS migration not required
Sep 10 23:22:06.792599 kernel: psci: SMC Calling Convention v1.1
Sep 10 23:22:06.792606 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 10 23:22:06.792612 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 10 23:22:06.792620 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 10 23:22:06.792626 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 10 23:22:06.792632 kernel: Detected PIPT I-cache on CPU0
Sep 10 23:22:06.792639 kernel: CPU features: detected: GIC system register CPU interface
Sep 10 23:22:06.792645 kernel: CPU features: detected: Spectre-v4
Sep 10 23:22:06.792651 kernel: CPU features: detected: Spectre-BHB
Sep 10 23:22:06.792658 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 10 23:22:06.792664 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 10 23:22:06.792670 kernel: CPU features: detected: ARM erratum 1418040
Sep 10 23:22:06.792676 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 10 23:22:06.792683 kernel: alternatives: applying boot alternatives
Sep 10 23:22:06.792690 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fa1cdbdcf235a334637eb5be2b0973f49e389ed29b057fae47365cdb3976f114
Sep 10 23:22:06.792698 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 10 23:22:06.792704 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 10 23:22:06.792711 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 10 23:22:06.792717 kernel: Fallback order for Node 0: 0
Sep 10 23:22:06.792723 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 10 23:22:06.792729 kernel: Policy zone: DMA
Sep 10 23:22:06.792745 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 10 23:22:06.792752 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 10 23:22:06.792758 kernel: software IO TLB: area num 4.
Sep 10 23:22:06.792764 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 10 23:22:06.792771 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 10 23:22:06.792780 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 10 23:22:06.792786 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 10 23:22:06.792793 kernel: rcu: RCU event tracing is enabled.
Sep 10 23:22:06.792800 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 10 23:22:06.792806 kernel: Trampoline variant of Tasks RCU enabled.
Sep 10 23:22:06.792813 kernel: Tracing variant of Tasks RCU enabled.
Sep 10 23:22:06.792819 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 10 23:22:06.792825 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 10 23:22:06.792832 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:22:06.792838 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 10 23:22:06.792845 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 10 23:22:06.792852 kernel: GICv3: 256 SPIs implemented
Sep 10 23:22:06.792859 kernel: GICv3: 0 Extended SPIs implemented
Sep 10 23:22:06.792865 kernel: Root IRQ handler: gic_handle_irq
Sep 10 23:22:06.792871 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 10 23:22:06.792877 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 10 23:22:06.792884 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 10 23:22:06.792890 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 10 23:22:06.792896 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 10 23:22:06.792903 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 10 23:22:06.792909 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 10 23:22:06.792916 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 10 23:22:06.792922 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 10 23:22:06.792930 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:22:06.792936 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 10 23:22:06.792943 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 10 23:22:06.792949 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 10 23:22:06.792956 kernel: arm-pv: using stolen time PV
Sep 10 23:22:06.792963 kernel: Console: colour dummy device 80x25
Sep 10 23:22:06.792969 kernel: ACPI: Core revision 20240827
Sep 10 23:22:06.792976 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 10 23:22:06.792982 kernel: pid_max: default: 32768 minimum: 301
Sep 10 23:22:06.792989 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 10 23:22:06.792997 kernel: landlock: Up and running.
Sep 10 23:22:06.793004 kernel: SELinux: Initializing.
Sep 10 23:22:06.793010 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:22:06.793017 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 10 23:22:06.793023 kernel: rcu: Hierarchical SRCU implementation.
Sep 10 23:22:06.793030 kernel: rcu: Max phase no-delay instances is 400.
Sep 10 23:22:06.793037 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 10 23:22:06.793043 kernel: Remapping and enabling EFI services.
Sep 10 23:22:06.793050 kernel: smp: Bringing up secondary CPUs ...
Sep 10 23:22:06.793181 kernel: Detected PIPT I-cache on CPU1
Sep 10 23:22:06.793190 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 10 23:22:06.793197 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 10 23:22:06.793206 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:22:06.793213 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 10 23:22:06.793220 kernel: Detected PIPT I-cache on CPU2
Sep 10 23:22:06.793227 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 10 23:22:06.793234 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 10 23:22:06.793242 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:22:06.793248 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 10 23:22:06.793255 kernel: Detected PIPT I-cache on CPU3
Sep 10 23:22:06.793262 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 10 23:22:06.793269 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 10 23:22:06.793276 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 10 23:22:06.793283 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 10 23:22:06.793290 kernel: smp: Brought up 1 node, 4 CPUs
Sep 10 23:22:06.793297 kernel: SMP: Total of 4 processors activated.
Sep 10 23:22:06.793368 kernel: CPU: All CPU(s) started at EL1
Sep 10 23:22:06.793376 kernel: CPU features: detected: 32-bit EL0 Support
Sep 10 23:22:06.793383 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 10 23:22:06.793390 kernel: CPU features: detected: Common not Private translations
Sep 10 23:22:06.793397 kernel: CPU features: detected: CRC32 instructions
Sep 10 23:22:06.793404 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 10 23:22:06.793411 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 10 23:22:06.793417 kernel: CPU features: detected: LSE atomic instructions
Sep 10 23:22:06.793424 kernel: CPU features: detected: Privileged Access Never
Sep 10 23:22:06.793433 kernel: CPU features: detected: RAS Extension Support
Sep 10 23:22:06.793440 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 10 23:22:06.793447 kernel: alternatives: applying system-wide alternatives
Sep 10 23:22:06.793453 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 10 23:22:06.793461 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2436K rwdata, 9064K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved)
Sep 10 23:22:06.793469 kernel: devtmpfs: initialized
Sep 10 23:22:06.793476 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 10 23:22:06.793483 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 10 23:22:06.793490 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 10 23:22:06.793498 kernel: 0 pages in range for non-PLT usage
Sep 10 23:22:06.793505 kernel: 508576 pages in range for PLT usage
Sep 10 23:22:06.793512 kernel: pinctrl core: initialized pinctrl subsystem
Sep 10 23:22:06.793520 kernel: SMBIOS 3.0.0 present.
Sep 10 23:22:06.793527 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 10 23:22:06.793534 kernel: DMI: Memory slots populated: 1/1
Sep 10 23:22:06.793540 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 10 23:22:06.793548 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 10 23:22:06.793555 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 10 23:22:06.793564 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 10 23:22:06.793571 kernel: audit: initializing netlink subsys (disabled)
Sep 10 23:22:06.793578 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Sep 10 23:22:06.793585 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 10 23:22:06.793592 kernel: cpuidle: using governor menu
Sep 10 23:22:06.793599 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 10 23:22:06.793606 kernel: ASID allocator initialised with 32768 entries
Sep 10 23:22:06.793613 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 10 23:22:06.793620 kernel: Serial: AMBA PL011 UART driver
Sep 10 23:22:06.793628 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 10 23:22:06.793703 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 10 23:22:06.793715 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 10 23:22:06.793722 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 10 23:22:06.793729 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 10 23:22:06.793742 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 10 23:22:06.793751 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 10 23:22:06.793758 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 10 23:22:06.793764 kernel: ACPI: Added _OSI(Module Device)
Sep 10 23:22:06.793774 kernel: ACPI: Added _OSI(Processor Device)
Sep 10 23:22:06.793781 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 10 23:22:06.793788 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 10 23:22:06.793795 kernel: ACPI: Interpreter enabled
Sep 10 23:22:06.793803 kernel: ACPI: Using GIC for interrupt routing
Sep 10 23:22:06.793809 kernel: ACPI: MCFG table detected, 1 entries
Sep 10 23:22:06.793816 kernel: ACPI: CPU0 has been hot-added
Sep 10 23:22:06.793823 kernel: ACPI: CPU1 has been hot-added
Sep 10 23:22:06.793830 kernel: ACPI: CPU2 has been hot-added
Sep 10 23:22:06.793837 kernel: ACPI: CPU3 has been hot-added
Sep 10 23:22:06.793845 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 10 23:22:06.793852 kernel: printk: legacy console [ttyAMA0] enabled
Sep 10 23:22:06.793859 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 10 23:22:06.794004 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 10 23:22:06.794071 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 10 23:22:06.794131 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 10 23:22:06.794761 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 10 23:22:06.794828 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 10 23:22:06.794838 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 10 23:22:06.794845 kernel: PCI host bridge to bus 0000:00
Sep 10 23:22:06.794913 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 10 23:22:06.794968 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 10 23:22:06.795022 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 10 23:22:06.795076 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 10 23:22:06.795241 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 10 23:22:06.795323 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 10 23:22:06.795388 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 10 23:22:06.795452 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 10 23:22:06.795575 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 10 23:22:06.795636 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 10 23:22:06.795704 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 10 23:22:06.795779 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 10 23:22:06.795938 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 10 23:22:06.796009 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 10 23:22:06.796062 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 10 23:22:06.796072 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 10 23:22:06.796079 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 10 23:22:06.796086 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 10 23:22:06.796097 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 10 23:22:06.796104 kernel: iommu: Default domain type: Translated
Sep 10 23:22:06.796111 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 10 23:22:06.796118 kernel: efivars: Registered efivars operations
Sep 10 23:22:06.796125 kernel: vgaarb: loaded
Sep 10 23:22:06.796132 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 10 23:22:06.796154 kernel: VFS: Disk quotas dquot_6.6.0
Sep 10 23:22:06.796161 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 10 23:22:06.796168 kernel: pnp: PnP ACPI init
Sep 10 23:22:06.796245 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 10 23:22:06.796256 kernel: pnp: PnP ACPI: found 1 devices
Sep 10 23:22:06.796263 kernel: NET: Registered PF_INET protocol family
Sep 10 23:22:06.796270 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 10 23:22:06.796277 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 10 23:22:06.796285 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 10 23:22:06.796292 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 10 23:22:06.796300 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 10 23:22:06.796308 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 10 23:22:06.796315 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:22:06.796322 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 10 23:22:06.796330 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 10 23:22:06.796337 kernel: PCI: CLS 0 bytes, default 64
Sep 10 23:22:06.796344 kernel: kvm [1]: HYP mode not available
Sep 10 23:22:06.796351 kernel: Initialise system trusted keyrings
Sep 10 23:22:06.796358 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 10 23:22:06.796365 kernel: Key type asymmetric registered
Sep 10 23:22:06.796373 kernel: Asymmetric key parser 'x509' registered
Sep 10 23:22:06.796380 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 10 23:22:06.796387 kernel: io scheduler mq-deadline registered
Sep 10 23:22:06.796394 kernel: io scheduler kyber registered
Sep 10 23:22:06.796401 kernel: io scheduler bfq registered
Sep 10 23:22:06.796413 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 10 23:22:06.796420 kernel: ACPI: button: Power Button [PWRB]
Sep 10 23:22:06.796428 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 10 23:22:06.796490 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 10 23:22:06.796501 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 10 23:22:06.796509 kernel: thunder_xcv, ver 1.0
Sep 10 23:22:06.796515 kernel: thunder_bgx, ver 1.0
Sep 10 23:22:06.796523 kernel: nicpf, ver 1.0
Sep 10 23:22:06.796529 kernel: nicvf, ver 1.0
Sep 10 23:22:06.796615 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 10 23:22:06.796671 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-10T23:22:06 UTC (1757546526)
Sep 10 23:22:06.796680 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 10 23:22:06.796687 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 10 23:22:06.796696 kernel: watchdog: NMI not fully supported
Sep 10 23:22:06.796703 kernel: watchdog: Hard watchdog permanently disabled
Sep 10 23:22:06.796710 kernel: NET: Registered PF_INET6 protocol family
Sep 10 23:22:06.796717 kernel: Segment Routing with IPv6
Sep 10 23:22:06.796724 kernel: In-situ OAM (IOAM) with IPv6
Sep 10 23:22:06.796731 kernel: NET: Registered PF_PACKET protocol family
Sep 10 23:22:06.796748 kernel: Key type dns_resolver registered
Sep 10 23:22:06.796755 kernel: registered taskstats version 1
Sep 10 23:22:06.796762 kernel: Loading compiled-in X.509 certificates
Sep 10 23:22:06.796772 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: 614348c8450ce34f552a2f872e2a442c01d91c4b'
Sep 10 23:22:06.796779 kernel: Demotion targets for Node 0: null
Sep 10 23:22:06.796786 kernel: Key type .fscrypt registered
Sep 10 23:22:06.796792 kernel: Key type fscrypt-provisioning registered
Sep 10 23:22:06.796799 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 10 23:22:06.796807 kernel: ima: Allocated hash algorithm: sha1
Sep 10 23:22:06.796813 kernel: ima: No architecture policies found
Sep 10 23:22:06.796820 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 10 23:22:06.796828 kernel: clk: Disabling unused clocks
Sep 10 23:22:06.796835 kernel: PM: genpd: Disabling unused power domains
Sep 10 23:22:06.796842 kernel: Warning: unable to open an initial console.
Sep 10 23:22:06.796850 kernel: Freeing unused kernel memory: 38912K
Sep 10 23:22:06.796857 kernel: Run /init as init process
Sep 10 23:22:06.796863 kernel: with arguments:
Sep 10 23:22:06.796870 kernel: /init
Sep 10 23:22:06.796877 kernel: with environment:
Sep 10 23:22:06.796884 kernel: HOME=/
Sep 10 23:22:06.796891 kernel: TERM=linux
Sep 10 23:22:06.796899 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 10 23:22:06.796907 systemd[1]: Successfully made /usr/ read-only.
Sep 10 23:22:06.796917 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:22:06.796926 systemd[1]: Detected virtualization kvm.
Sep 10 23:22:06.796933 systemd[1]: Detected architecture arm64.
Sep 10 23:22:06.796940 systemd[1]: Running in initrd.
Sep 10 23:22:06.796947 systemd[1]: No hostname configured, using default hostname.
Sep 10 23:22:06.796957 systemd[1]: Hostname set to .
Sep 10 23:22:06.796964 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:22:06.796972 systemd[1]: Queued start job for default target initrd.target.
Sep 10 23:22:06.796979 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:22:06.796987 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:22:06.796995 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 10 23:22:06.797002 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:22:06.797010 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 10 23:22:06.797020 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 10 23:22:06.797028 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 10 23:22:06.797036 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 10 23:22:06.797044 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:22:06.797051 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:22:06.797059 systemd[1]: Reached target paths.target - Path Units.
Sep 10 23:22:06.797066 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:22:06.797075 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:22:06.797083 systemd[1]: Reached target timers.target - Timer Units.
Sep 10 23:22:06.797090 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:22:06.797097 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:22:06.797105 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 10 23:22:06.797113 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 10 23:22:06.797120 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:22:06.797128 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:22:06.797147 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:22:06.797156 systemd[1]: Reached target sockets.target - Socket Units.
Sep 10 23:22:06.797163 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 10 23:22:06.797171 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:22:06.797178 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 10 23:22:06.797186 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 10 23:22:06.797194 systemd[1]: Starting systemd-fsck-usr.service...
Sep 10 23:22:06.797202 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:22:06.797289 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:22:06.797361 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:22:06.797371 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 10 23:22:06.797380 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:22:06.797387 systemd[1]: Finished systemd-fsck-usr.service.
Sep 10 23:22:06.797398 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 10 23:22:06.797406 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:22:06.797413 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 10 23:22:06.797476 systemd-journald[244]: Collecting audit messages is disabled.
Sep 10 23:22:06.797502 kernel: Bridge firewalling registered
Sep 10 23:22:06.797511 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 10 23:22:06.797519 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:22:06.797528 systemd-journald[244]: Journal started
Sep 10 23:22:06.797546 systemd-journald[244]: Runtime Journal (/run/log/journal/ecf754feb0c1413aab6199f05c460782) is 6M, max 48.5M, 42.4M free.
Sep 10 23:22:06.762181 systemd-modules-load[247]: Inserted module 'overlay'
Sep 10 23:22:06.792296 systemd-modules-load[247]: Inserted module 'br_netfilter'
Sep 10 23:22:06.800534 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:22:06.808312 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 10 23:22:06.811532 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:22:06.813270 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:22:06.817278 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:22:06.822115 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:22:06.825451 systemd-tmpfiles[275]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 10 23:22:06.825825 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:22:06.829191 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:22:06.830445 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:22:06.833348 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 10 23:22:06.835474 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:22:06.867846 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fa1cdbdcf235a334637eb5be2b0973f49e389ed29b057fae47365cdb3976f114
Sep 10 23:22:06.882303 systemd-resolved[291]: Positive Trust Anchors:
Sep 10 23:22:06.882322 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:22:06.882355 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:22:06.887545 systemd-resolved[291]: Defaulting to hostname 'linux'.
Sep 10 23:22:06.888531 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:22:06.892631 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:22:06.948187 kernel: SCSI subsystem initialized
Sep 10 23:22:06.954156 kernel: Loading iSCSI transport class v2.0-870.
Sep 10 23:22:06.963163 kernel: iscsi: registered transport (tcp)
Sep 10 23:22:06.976183 kernel: iscsi: registered transport (qla4xxx)
Sep 10 23:22:06.976246 kernel: QLogic iSCSI HBA Driver
Sep 10 23:22:06.994020 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:22:07.011038 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:22:07.013044 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:22:07.061569 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:22:07.063780 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 10 23:22:07.121180 kernel: raid6: neonx8 gen() 15780 MB/s
Sep 10 23:22:07.138189 kernel: raid6: neonx4 gen() 15777 MB/s
Sep 10 23:22:07.155179 kernel: raid6: neonx2 gen() 13259 MB/s
Sep 10 23:22:07.172182 kernel: raid6: neonx1 gen() 10391 MB/s
Sep 10 23:22:07.189181 kernel: raid6: int64x8 gen() 6807 MB/s
Sep 10 23:22:07.206185 kernel: raid6: int64x4 gen() 7010 MB/s
Sep 10 23:22:07.223186 kernel: raid6: int64x2 gen() 6101 MB/s
Sep 10 23:22:07.240181 kernel: raid6: int64x1 gen() 5047 MB/s
Sep 10 23:22:07.240251 kernel: raid6: using algorithm neonx8 gen() 15780 MB/s
Sep 10 23:22:07.257177 kernel: raid6: .... xor() 11976 MB/s, rmw enabled
Sep 10 23:22:07.257235 kernel: raid6: using neon recovery algorithm
Sep 10 23:22:07.262547 kernel: xor: measuring software checksum speed
Sep 10 23:22:07.262585 kernel: 8regs : 21613 MB/sec
Sep 10 23:22:07.263157 kernel: 32regs : 21687 MB/sec
Sep 10 23:22:07.264158 kernel: arm64_neon : 24892 MB/sec
Sep 10 23:22:07.264177 kernel: xor: using function: arm64_neon (24892 MB/sec)
Sep 10 23:22:07.317178 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 10 23:22:07.323651 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:22:07.325962 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:22:07.354227 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Sep 10 23:22:07.358550 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:22:07.360866 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 10 23:22:07.381875 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Sep 10 23:22:07.411052 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 23:22:07.413288 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:22:07.465174 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:22:07.469438 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 10 23:22:07.522259 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 10 23:22:07.522467 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 10 23:22:07.533892 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:22:07.534034 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:22:07.539840 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 10 23:22:07.539872 kernel: GPT:9289727 != 19775487
Sep 10 23:22:07.539881 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 10 23:22:07.539890 kernel: GPT:9289727 != 19775487
Sep 10 23:22:07.539898 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 10 23:22:07.539907 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:22:07.539423 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:22:07.543396 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 10 23:22:07.573607 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 10 23:22:07.579812 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 10 23:22:07.580976 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:22:07.590265 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 10 23:22:07.597774 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 23:22:07.603789 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 10 23:22:07.604776 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 10 23:22:07.607129 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 23:22:07.608987 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:22:07.610721 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:22:07.613126 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 10 23:22:07.614686 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 10 23:22:07.641497 disk-uuid[592]: Primary Header is updated.
Sep 10 23:22:07.641497 disk-uuid[592]: Secondary Entries is updated.
Sep 10 23:22:07.641497 disk-uuid[592]: Secondary Header is updated.
Sep 10 23:22:07.645601 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 23:22:07.647850 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:22:07.652150 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:22:08.652168 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 10 23:22:08.654576 disk-uuid[596]: The operation has completed successfully.
Sep 10 23:22:08.678376 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 10 23:22:08.678473 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 10 23:22:08.704605 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 10 23:22:08.733186 sh[612]: Success
Sep 10 23:22:08.748174 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 10 23:22:08.748235 kernel: device-mapper: uevent: version 1.0.3
Sep 10 23:22:08.748246 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 10 23:22:08.763170 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 10 23:22:08.794683 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 10 23:22:08.800238 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 10 23:22:08.811650 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 10 23:22:08.816160 kernel: BTRFS: device fsid 9579753c-128c-4fc3-99bd-ee6c9d1a9b4e devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (624)
Sep 10 23:22:08.816193 kernel: BTRFS info (device dm-0): first mount of filesystem 9579753c-128c-4fc3-99bd-ee6c9d1a9b4e
Sep 10 23:22:08.817952 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:22:08.821472 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 10 23:22:08.821516 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 10 23:22:08.822565 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 10 23:22:08.823676 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 10 23:22:08.824934 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 10 23:22:08.825711 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 10 23:22:08.827222 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 10 23:22:08.850046 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (655)
Sep 10 23:22:08.850084 kernel: BTRFS info (device vda6): first mount of filesystem 3ae7220e-23eb-4db6-8e25-d26e17ea4ea4
Sep 10 23:22:08.850095 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:22:08.853167 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:22:08.853208 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:22:08.857185 kernel: BTRFS info (device vda6): last unmount of filesystem 3ae7220e-23eb-4db6-8e25-d26e17ea4ea4
Sep 10 23:22:08.858675 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 10 23:22:08.860504 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 10 23:22:08.927725 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:22:08.932426 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:22:08.960048 ignition[700]: Ignition 2.21.0
Sep 10 23:22:08.960063 ignition[700]: Stage: fetch-offline
Sep 10 23:22:08.960096 ignition[700]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:22:08.960104 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:22:08.960272 ignition[700]: parsed url from cmdline: ""
Sep 10 23:22:08.960275 ignition[700]: no config URL provided
Sep 10 23:22:08.960280 ignition[700]: reading system config file "/usr/lib/ignition/user.ign"
Sep 10 23:22:08.960287 ignition[700]: no config at "/usr/lib/ignition/user.ign"
Sep 10 23:22:08.960305 ignition[700]: op(1): [started] loading QEMU firmware config module
Sep 10 23:22:08.960311 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 10 23:22:08.965157 ignition[700]: op(1): [finished] loading QEMU firmware config module
Sep 10 23:22:08.979748 systemd-networkd[802]: lo: Link UP
Sep 10 23:22:08.979760 systemd-networkd[802]: lo: Gained carrier
Sep 10 23:22:08.980551 systemd-networkd[802]: Enumeration completed
Sep 10 23:22:08.980689 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 23:22:08.982297 systemd[1]: Reached target network.target - Network.
Sep 10 23:22:08.984104 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:22:08.984107 systemd-networkd[802]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 23:22:08.984940 systemd-networkd[802]: eth0: Link UP
Sep 10 23:22:08.985269 systemd-networkd[802]: eth0: Gained carrier
Sep 10 23:22:08.985278 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:22:09.014187 systemd-networkd[802]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 10 23:22:09.019623 ignition[700]: parsing config with SHA512: a680f75dd9d515b40307365c082592d34f1dd3c9b079c1405a6df09e9fecced429f01b1590d3e644584f7fb7ae5b085a435df0dd379b85f1e7379d8abdf67d55
Sep 10 23:22:09.025780 unknown[700]: fetched base config from "system"
Sep 10 23:22:09.025794 unknown[700]: fetched user config from "qemu"
Sep 10 23:22:09.026271 ignition[700]: fetch-offline: fetch-offline passed
Sep 10 23:22:09.026328 ignition[700]: Ignition finished successfully
Sep 10 23:22:09.028853 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 23:22:09.031608 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 10 23:22:09.032437 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 10 23:22:09.060973 ignition[809]: Ignition 2.21.0
Sep 10 23:22:09.060987 ignition[809]: Stage: kargs
Sep 10 23:22:09.061183 ignition[809]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:22:09.061192 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:22:09.063358 ignition[809]: kargs: kargs passed
Sep 10 23:22:09.063403 ignition[809]: Ignition finished successfully
Sep 10 23:22:09.066128 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 10 23:22:09.068040 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 10 23:22:09.099615 ignition[817]: Ignition 2.21.0
Sep 10 23:22:09.099631 ignition[817]: Stage: disks
Sep 10 23:22:09.099780 ignition[817]: no configs at "/usr/lib/ignition/base.d"
Sep 10 23:22:09.099789 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:22:09.101358 ignition[817]: disks: disks passed
Sep 10 23:22:09.103477 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 10 23:22:09.101412 ignition[817]: Ignition finished successfully
Sep 10 23:22:09.104598 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 10 23:22:09.106096 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 10 23:22:09.107714 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:22:09.109329 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 10 23:22:09.110938 systemd[1]: Reached target basic.target - Basic System.
Sep 10 23:22:09.113295 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 10 23:22:09.146960 systemd-fsck[827]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 10 23:22:09.152048 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 10 23:22:09.155587 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 10 23:22:09.233160 kernel: EXT4-fs (vda9): mounted filesystem e1f6153c-c458-4b1b-a85a-9d30297a863a r/w with ordered data mode. Quota mode: none.
Sep 10 23:22:09.233479 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 10 23:22:09.234599 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:22:09.239529 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 23:22:09.242701 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 10 23:22:09.243573 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 10 23:22:09.243617 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 10 23:22:09.243643 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 23:22:09.252012 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 10 23:22:09.255445 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 10 23:22:09.260021 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (835)
Sep 10 23:22:09.260052 kernel: BTRFS info (device vda6): first mount of filesystem 3ae7220e-23eb-4db6-8e25-d26e17ea4ea4
Sep 10 23:22:09.260063 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:22:09.265740 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:22:09.265800 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:22:09.267608 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 23:22:09.296076 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Sep 10 23:22:09.300704 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Sep 10 23:22:09.303710 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Sep 10 23:22:09.310534 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 10 23:22:09.388519 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 10 23:22:09.391073 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 10 23:22:09.392629 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 10 23:22:09.411170 kernel: BTRFS info (device vda6): last unmount of filesystem 3ae7220e-23eb-4db6-8e25-d26e17ea4ea4
Sep 10 23:22:09.427299 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 10 23:22:09.442570 ignition[949]: INFO : Ignition 2.21.0
Sep 10 23:22:09.442570 ignition[949]: INFO : Stage: mount
Sep 10 23:22:09.442570 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:22:09.442570 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:22:09.445857 ignition[949]: INFO : mount: mount passed
Sep 10 23:22:09.445857 ignition[949]: INFO : Ignition finished successfully
Sep 10 23:22:09.446960 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 10 23:22:09.449072 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 10 23:22:09.815918 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 10 23:22:09.817531 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 10 23:22:09.863028 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (962)
Sep 10 23:22:09.863081 kernel: BTRFS info (device vda6): first mount of filesystem 3ae7220e-23eb-4db6-8e25-d26e17ea4ea4
Sep 10 23:22:09.863092 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 10 23:22:09.869152 kernel: BTRFS info (device vda6): turning on async discard
Sep 10 23:22:09.869203 kernel: BTRFS info (device vda6): enabling free space tree
Sep 10 23:22:09.870401 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 10 23:22:09.897525 ignition[979]: INFO : Ignition 2.21.0
Sep 10 23:22:09.897525 ignition[979]: INFO : Stage: files
Sep 10 23:22:09.900236 ignition[979]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:22:09.900236 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:22:09.900236 ignition[979]: DEBUG : files: compiled without relabeling support, skipping
Sep 10 23:22:09.903202 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 10 23:22:09.903202 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 10 23:22:09.905187 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 10 23:22:09.905187 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 10 23:22:09.905187 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 10 23:22:09.904971 unknown[979]: wrote ssh authorized keys file for user: core
Sep 10 23:22:09.909454 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 10 23:22:09.909454 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 10 23:22:09.947118 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 10 23:22:10.165287 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 10 23:22:10.165287 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 10 23:22:10.168803 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 10 23:22:10.168803 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 23:22:10.168803 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 10 23:22:10.168803 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 23:22:10.168803 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 10 23:22:10.168803 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 23:22:10.168803 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 10 23:22:10.168803 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 23:22:10.168803 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 10 23:22:10.168803 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 23:22:10.183094 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 23:22:10.183094 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 23:22:10.183094 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 10 23:22:10.646591 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 10 23:22:10.704758 systemd-networkd[802]: eth0: Gained IPv6LL
Sep 10 23:22:11.173096 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 10 23:22:11.173096 ignition[979]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 10 23:22:11.176537 ignition[979]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 23:22:11.180154 ignition[979]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 10 23:22:11.180154 ignition[979]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 10 23:22:11.180154 ignition[979]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 10 23:22:11.184206 ignition[979]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 23:22:11.184206 ignition[979]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 10 23:22:11.184206 ignition[979]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 10 23:22:11.184206 ignition[979]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 10 23:22:11.197703 ignition[979]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 23:22:11.201450 ignition[979]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 10 23:22:11.204087 ignition[979]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 10 23:22:11.204087 ignition[979]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 10 23:22:11.204087 ignition[979]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 10 23:22:11.204087 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 23:22:11.204087 ignition[979]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 10 23:22:11.204087 ignition[979]: INFO : files: files passed
Sep 10 23:22:11.204087 ignition[979]: INFO : Ignition finished successfully
Sep 10 23:22:11.205790 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 10 23:22:11.211282 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 10 23:22:11.215389 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 10 23:22:11.233411 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 10 23:22:11.234684 initrd-setup-root-after-ignition[1008]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 10 23:22:11.235182 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 10 23:22:11.238592 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:22:11.240265 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:22:11.240265 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 10 23:22:11.240621 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 23:22:11.243465 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 10 23:22:11.246272 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 10 23:22:11.277916 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 10 23:22:11.278027 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 10 23:22:11.280068 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 10 23:22:11.281784 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 10 23:22:11.283399 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 10 23:22:11.284206 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 10 23:22:11.310200 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 23:22:11.312334 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 10 23:22:11.330951 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:22:11.332014 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:22:11.333850 systemd[1]: Stopped target timers.target - Timer Units.
Sep 10 23:22:11.335377 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 10 23:22:11.335513 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 10 23:22:11.337565 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 10 23:22:11.339096 systemd[1]: Stopped target basic.target - Basic System.
Sep 10 23:22:11.340447 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 10 23:22:11.341848 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 10 23:22:11.343403 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 10 23:22:11.345001 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 10 23:22:11.346948 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 10 23:22:11.348585 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 10 23:22:11.350185 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 10 23:22:11.351950 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 10 23:22:11.353438 systemd[1]: Stopped target swap.target - Swaps.
Sep 10 23:22:11.354755 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 10 23:22:11.354881 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 10 23:22:11.356848 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:22:11.358516 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:22:11.360103 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 10 23:22:11.361190 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:22:11.362801 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 10 23:22:11.362905 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 10 23:22:11.365340 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 10 23:22:11.365455 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 10 23:22:11.367179 systemd[1]: Stopped target paths.target - Path Units.
Sep 10 23:22:11.368709 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 10 23:22:11.368815 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:22:11.370533 systemd[1]: Stopped target slices.target - Slice Units.
Sep 10 23:22:11.371855 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 10 23:22:11.373476 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 10 23:22:11.373555 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 10 23:22:11.375271 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 10 23:22:11.375343 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 10 23:22:11.376576 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 10 23:22:11.376682 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 10 23:22:11.378164 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 10 23:22:11.378259 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 10 23:22:11.380308 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 10 23:22:11.381589 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 10 23:22:11.381707 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:22:11.383991 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 10 23:22:11.384834 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 10 23:22:11.384956 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:22:11.386881 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 10 23:22:11.386978 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 10 23:22:11.391578 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 10 23:22:11.394320 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 10 23:22:11.402088 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 10 23:22:11.407572 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 10 23:22:11.409180 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 10 23:22:11.415316 ignition[1035]: INFO : Ignition 2.21.0
Sep 10 23:22:11.417248 ignition[1035]: INFO : Stage: umount
Sep 10 23:22:11.417248 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 10 23:22:11.417248 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 10 23:22:11.419426 ignition[1035]: INFO : umount: umount passed
Sep 10 23:22:11.419426 ignition[1035]: INFO : Ignition finished successfully
Sep 10 23:22:11.420220 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 10 23:22:11.421635 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 10 23:22:11.422716 systemd[1]: Stopped target network.target - Network.
Sep 10 23:22:11.424011 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 10 23:22:11.424082 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 10 23:22:11.425542 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 10 23:22:11.425594 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 10 23:22:11.426901 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 10 23:22:11.426949 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 10 23:22:11.428381 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 10 23:22:11.428420 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 10 23:22:11.429964 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 10 23:22:11.430016 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 10 23:22:11.431657 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 10 23:22:11.433129 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 10 23:22:11.440880 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 10 23:22:11.440986 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 10 23:22:11.444994 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 10 23:22:11.445283 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 10 23:22:11.445319 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:22:11.448574 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 10 23:22:11.448774 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 10 23:22:11.448868 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 10 23:22:11.452821 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 10 23:22:11.453245 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 10 23:22:11.456229 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 10 23:22:11.456269 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:22:11.459056 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 10 23:22:11.464498 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 10 23:22:11.464556 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 10 23:22:11.466219 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 10 23:22:11.466259 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:22:11.469255 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 10 23:22:11.469295 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:22:11.470886 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:22:11.474602 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 10 23:22:11.493882 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 10 23:22:11.494274 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:22:11.495976 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 10 23:22:11.496013 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:22:11.497654 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 10 23:22:11.497683 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:22:11.499167 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 10 23:22:11.499228 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 10 23:22:11.501594 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 10 23:22:11.501639 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 10 23:22:11.503824 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 10 23:22:11.503871 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 10 23:22:11.507005 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 10 23:22:11.508433 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 10 23:22:11.508493 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:22:11.511058 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 10 23:22:11.511099 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:22:11.513759 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 10 23:22:11.513799 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 10 23:22:11.516933 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 10 23:22:11.528295 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 10 23:22:11.533303 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 10 23:22:11.533401 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 10 23:22:11.535253 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 10 23:22:11.537325 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 10 23:22:11.556927 systemd[1]: Switching root.
Sep 10 23:22:11.588593 systemd-journald[244]: Journal stopped
Sep 10 23:22:12.351224 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 10 23:22:12.351276 kernel: SELinux: policy capability network_peer_controls=1
Sep 10 23:22:12.351292 kernel: SELinux: policy capability open_perms=1
Sep 10 23:22:12.351302 kernel: SELinux: policy capability extended_socket_class=1
Sep 10 23:22:12.351312 kernel: SELinux: policy capability always_check_network=0
Sep 10 23:22:12.351325 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 10 23:22:12.351334 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 10 23:22:12.351347 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 10 23:22:12.351356 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 10 23:22:12.351365 kernel: SELinux: policy capability userspace_initial_context=0
Sep 10 23:22:12.351375 kernel: audit: type=1403 audit(1757546531.763:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 10 23:22:12.351385 systemd[1]: Successfully loaded SELinux policy in 64.940ms.
Sep 10 23:22:12.351398 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.495ms.
Sep 10 23:22:12.351409 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 10 23:22:12.351426 systemd[1]: Detected virtualization kvm.
Sep 10 23:22:12.351482 systemd[1]: Detected architecture arm64.
Sep 10 23:22:12.351494 systemd[1]: Detected first boot.
Sep 10 23:22:12.351504 systemd[1]: Initializing machine ID from VM UUID.
Sep 10 23:22:12.351514 zram_generator::config[1081]: No configuration found.
Sep 10 23:22:12.351529 kernel: NET: Registered PF_VSOCK protocol family
Sep 10 23:22:12.351542 systemd[1]: Populated /etc with preset unit settings.
Sep 10 23:22:12.351553 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 10 23:22:12.351563 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 10 23:22:12.351577 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 10 23:22:12.351588 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 10 23:22:12.351598 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 10 23:22:12.351609 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 10 23:22:12.351619 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 10 23:22:12.351630 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 10 23:22:12.351640 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 10 23:22:12.351650 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 10 23:22:12.351660 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 10 23:22:12.351671 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 10 23:22:12.351682 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 10 23:22:12.351693 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 10 23:22:12.351702 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 10 23:22:12.351712 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 10 23:22:12.351731 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 10 23:22:12.351742 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 10 23:22:12.351754 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 10 23:22:12.351764 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 10 23:22:12.351781 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 10 23:22:12.351791 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 10 23:22:12.351801 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 10 23:22:12.351811 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 10 23:22:12.351821 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 10 23:22:12.351832 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 10 23:22:12.351843 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 10 23:22:12.351853 systemd[1]: Reached target slices.target - Slice Units.
Sep 10 23:22:12.351864 systemd[1]: Reached target swap.target - Swaps.
Sep 10 23:22:12.351876 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 10 23:22:12.351887 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 10 23:22:12.351897 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 10 23:22:12.351910 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 10 23:22:12.351920 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 10 23:22:12.351930 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 10 23:22:12.351940 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 10 23:22:12.351950 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 10 23:22:12.351960 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 10 23:22:12.351972 systemd[1]: Mounting media.mount - External Media Directory...
Sep 10 23:22:12.351982 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 10 23:22:12.351996 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 10 23:22:12.352006 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 10 23:22:12.352016 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 10 23:22:12.352026 systemd[1]: Reached target machines.target - Containers.
Sep 10 23:22:12.352036 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 10 23:22:12.352046 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:22:12.352058 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 10 23:22:12.352067 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 10 23:22:12.352077 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:22:12.352087 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:22:12.352099 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:22:12.352109 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 10 23:22:12.352118 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:22:12.352129 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 10 23:22:12.352214 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 10 23:22:12.352226 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 10 23:22:12.352236 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 10 23:22:12.352245 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 10 23:22:12.352256 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:22:12.352266 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 10 23:22:12.352276 kernel: loop: module loaded
Sep 10 23:22:12.352285 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 10 23:22:12.352294 kernel: fuse: init (API version 7.41)
Sep 10 23:22:12.352305 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 10 23:22:12.352315 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 10 23:22:12.352326 kernel: ACPI: bus type drm_connector registered
Sep 10 23:22:12.352336 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 10 23:22:12.352346 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 10 23:22:12.352361 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 10 23:22:12.352372 systemd[1]: Stopped verity-setup.service.
Sep 10 23:22:12.352405 systemd-journald[1160]: Collecting audit messages is disabled.
Sep 10 23:22:12.352426 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 10 23:22:12.352437 systemd-journald[1160]: Journal started
Sep 10 23:22:12.352458 systemd-journald[1160]: Runtime Journal (/run/log/journal/ecf754feb0c1413aab6199f05c460782) is 6M, max 48.5M, 42.4M free.
Sep 10 23:22:12.143706 systemd[1]: Queued start job for default target multi-user.target.
Sep 10 23:22:12.167239 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 10 23:22:12.167653 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 10 23:22:12.355288 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 10 23:22:12.355974 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 10 23:22:12.357126 systemd[1]: Mounted media.mount - External Media Directory.
Sep 10 23:22:12.358041 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 10 23:22:12.359113 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 10 23:22:12.360202 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 10 23:22:12.363181 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 10 23:22:12.364422 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 10 23:22:12.365592 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 10 23:22:12.365782 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 10 23:22:12.366987 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:22:12.367165 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:22:12.368267 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:22:12.368450 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:22:12.369766 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:22:12.369922 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:22:12.371244 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 10 23:22:12.371399 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 10 23:22:12.372497 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:22:12.372667 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:22:12.374195 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 10 23:22:12.375428 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 10 23:22:12.376690 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 10 23:22:12.378308 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 10 23:22:12.389920 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 10 23:22:12.392325 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 10 23:22:12.394077 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 10 23:22:12.395069 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 10 23:22:12.395100 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 10 23:22:12.396948 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 10 23:22:12.406002 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 10 23:22:12.407358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:22:12.408766 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 10 23:22:12.410536 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 10 23:22:12.411570 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:22:12.412498 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 10 23:22:12.413489 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:22:12.416335 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 10 23:22:12.418069 systemd-journald[1160]: Time spent on flushing to /var/log/journal/ecf754feb0c1413aab6199f05c460782 is 18.758ms for 884 entries.
Sep 10 23:22:12.418069 systemd-journald[1160]: System Journal (/var/log/journal/ecf754feb0c1413aab6199f05c460782) is 8M, max 195.6M, 187.6M free.
Sep 10 23:22:12.443955 systemd-journald[1160]: Received client request to flush runtime journal.
Sep 10 23:22:12.444004 kernel: loop0: detected capacity change from 0 to 100600
Sep 10 23:22:12.418480 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 10 23:22:12.423421 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 10 23:22:12.431370 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 10 23:22:12.432805 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 10 23:22:12.433920 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 10 23:22:12.442695 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 10 23:22:12.447733 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 10 23:22:12.450000 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 10 23:22:12.452321 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 10 23:22:12.454398 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 10 23:22:12.464165 kernel: loop1: detected capacity change from 0 to 211168
Sep 10 23:22:12.466382 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 10 23:22:12.482702 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 10 23:22:12.487513 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 10 23:22:12.490599 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 10 23:22:12.502242 kernel: loop2: detected capacity change from 0 to 119320
Sep 10 23:22:12.517971 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Sep 10 23:22:12.517987 systemd-tmpfiles[1217]: ACLs are not supported, ignoring.
Sep 10 23:22:12.521939 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 10 23:22:12.529158 kernel: loop3: detected capacity change from 0 to 100600
Sep 10 23:22:12.537161 kernel: loop4: detected capacity change from 0 to 211168
Sep 10 23:22:12.543155 kernel: loop5: detected capacity change from 0 to 119320
Sep 10 23:22:12.547472 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 10 23:22:12.547964 (sd-merge)[1222]: Merged extensions into '/usr'.
Sep 10 23:22:12.551126 systemd[1]: Reload requested from client PID 1197 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 10 23:22:12.551173 systemd[1]: Reloading...
Sep 10 23:22:12.618207 zram_generator::config[1248]: No configuration found.
Sep 10 23:22:12.713909 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 10 23:22:12.754983 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 10 23:22:12.755217 systemd[1]: Reloading finished in 203 ms.
Sep 10 23:22:12.769804 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 10 23:22:12.772156 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 10 23:22:12.784353 systemd[1]: Starting ensure-sysext.service...
Sep 10 23:22:12.786107 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 10 23:22:12.797545 systemd[1]: Reload requested from client PID 1282 ('systemctl') (unit ensure-sysext.service)...
Sep 10 23:22:12.797562 systemd[1]: Reloading...
Sep 10 23:22:12.799900 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 10 23:22:12.799933 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 10 23:22:12.800209 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 10 23:22:12.800400 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 10 23:22:12.801086 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 10 23:22:12.801359 systemd-tmpfiles[1283]: ACLs are not supported, ignoring.
Sep 10 23:22:12.801410 systemd-tmpfiles[1283]: ACLs are not supported, ignoring.
Sep 10 23:22:12.804292 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:22:12.804304 systemd-tmpfiles[1283]: Skipping /boot
Sep 10 23:22:12.810945 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot.
Sep 10 23:22:12.810960 systemd-tmpfiles[1283]: Skipping /boot
Sep 10 23:22:12.845164 zram_generator::config[1310]: No configuration found.
Sep 10 23:22:12.982107 systemd[1]: Reloading finished in 184 ms.
Sep 10 23:22:13.003825 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 10 23:22:13.009432 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 10 23:22:13.018246 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:22:13.020788 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 10 23:22:13.031977 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 10 23:22:13.034903 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 10 23:22:13.039307 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 10 23:22:13.044300 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 10 23:22:13.055176 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 10 23:22:13.059105 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:22:13.062346 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:22:13.064525 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:22:13.066805 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:22:13.068225 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:22:13.068646 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:22:13.071843 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 10 23:22:13.073996 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:22:13.074168 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:22:13.076536 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:22:13.076832 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:22:13.081790 systemd-udevd[1351]: Using default interface naming scheme 'v255'.
Sep 10 23:22:13.082505 augenrules[1376]: No rules
Sep 10 23:22:13.084026 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:22:13.085426 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:22:13.090474 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:22:13.090760 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:22:13.098069 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 10 23:22:13.099811 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 10 23:22:13.101616 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 10 23:22:13.103256 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 10 23:22:13.110456 systemd[1]: Finished ensure-sysext.service.
Sep 10 23:22:13.115524 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 10 23:22:13.116371 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 10 23:22:13.118414 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 10 23:22:13.122169 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 10 23:22:13.126158 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 10 23:22:13.131465 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 10 23:22:13.134330 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 10 23:22:13.134380 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 10 23:22:13.141172 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 10 23:22:13.145903 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 10 23:22:13.147734 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 10 23:22:13.149290 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 10 23:22:13.149810 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 10 23:22:13.151217 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 10 23:22:13.175874 augenrules[1411]: /sbin/augenrules: No change
Sep 10 23:22:13.180951 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 10 23:22:13.181178 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 10 23:22:13.186601 augenrules[1446]: No rules
Sep 10 23:22:13.188964 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 10 23:22:13.189643 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 10 23:22:13.191500 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 10 23:22:13.191829 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 10 23:22:13.194887 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 10 23:22:13.195242 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 10 23:22:13.197339 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 10 23:22:13.206610 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 10 23:22:13.219292 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 10 23:22:13.219368 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 10 23:22:13.249094 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 10 23:22:13.252008 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 10 23:22:13.268502 systemd-resolved[1350]: Positive Trust Anchors:
Sep 10 23:22:13.268521 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 10 23:22:13.268554 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 10 23:22:13.279126 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 10 23:22:13.281773 systemd-resolved[1350]: Defaulting to hostname 'linux'.
Sep 10 23:22:13.283461 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 10 23:22:13.284664 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 10 23:22:13.291563 systemd-networkd[1425]: lo: Link UP
Sep 10 23:22:13.291571 systemd-networkd[1425]: lo: Gained carrier
Sep 10 23:22:13.292978 systemd-networkd[1425]: Enumeration completed
Sep 10 23:22:13.293115 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 10 23:22:13.293485 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:22:13.293494 systemd-networkd[1425]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 10 23:22:13.294128 systemd-networkd[1425]: eth0: Link UP
Sep 10 23:22:13.294259 systemd-networkd[1425]: eth0: Gained carrier
Sep 10 23:22:13.294284 systemd-networkd[1425]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 10 23:22:13.294597 systemd[1]: Reached target network.target - Network. Sep 10 23:22:13.297425 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 10 23:22:13.300324 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 10 23:22:13.301568 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 10 23:22:13.302757 systemd[1]: Reached target sysinit.target - System Initialization. Sep 10 23:22:13.304019 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 10 23:22:13.305184 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 10 23:22:13.306192 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 10 23:22:13.307613 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 10 23:22:13.307642 systemd[1]: Reached target paths.target - Path Units. Sep 10 23:22:13.308229 systemd-networkd[1425]: eth0: DHCPv4 address 10.0.0.24/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 10 23:22:13.308368 systemd[1]: Reached target time-set.target - System Time Set. Sep 10 23:22:13.308892 systemd-timesyncd[1427]: Network configuration changed, trying to establish connection. Sep 10 23:22:13.309749 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 10 23:22:13.310059 systemd-timesyncd[1427]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 10 23:22:13.310102 systemd-timesyncd[1427]: Initial clock synchronization to Wed 2025-09-10 23:22:13.690458 UTC. Sep 10 23:22:13.310952 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 10 23:22:13.312365 systemd[1]: Reached target timers.target - Timer Units. 
Sep 10 23:22:13.313946 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 10 23:22:13.316846 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 10 23:22:13.320318 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 10 23:22:13.321571 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 10 23:22:13.322648 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 10 23:22:13.333274 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 10 23:22:13.334899 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 10 23:22:13.337454 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 10 23:22:13.338539 systemd[1]: Reached target sockets.target - Socket Units. Sep 10 23:22:13.340292 systemd[1]: Reached target basic.target - Basic System. Sep 10 23:22:13.341263 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:22:13.341288 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 10 23:22:13.344292 systemd[1]: Starting containerd.service - containerd container runtime... Sep 10 23:22:13.348386 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 10 23:22:13.351940 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 10 23:22:13.355311 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 10 23:22:13.358370 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 10 23:22:13.360215 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
Sep 10 23:22:13.361599 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 10 23:22:13.364824 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 10 23:22:13.366852 jq[1488]: false Sep 10 23:22:13.369051 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 10 23:22:13.371300 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 10 23:22:13.376715 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 10 23:22:13.378520 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 10 23:22:13.379365 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 10 23:22:13.383330 systemd[1]: Starting update-engine.service - Update Engine... Sep 10 23:22:13.386360 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 10 23:22:13.389006 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 10 23:22:13.399031 extend-filesystems[1489]: Found /dev/vda6 Sep 10 23:22:13.395562 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 10 23:22:13.396982 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 10 23:22:13.397163 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 10 23:22:13.397426 systemd[1]: motdgen.service: Deactivated successfully. Sep 10 23:22:13.397580 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 10 23:22:13.401658 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 10 23:22:13.405288 jq[1503]: true Sep 10 23:22:13.401882 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 10 23:22:13.411409 extend-filesystems[1489]: Found /dev/vda9 Sep 10 23:22:13.411409 extend-filesystems[1489]: Checking size of /dev/vda9 Sep 10 23:22:13.423093 update_engine[1502]: I20250910 23:22:13.422828 1502 main.cc:92] Flatcar Update Engine starting Sep 10 23:22:13.423485 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 10 23:22:13.425693 jq[1510]: true Sep 10 23:22:13.433201 tar[1508]: linux-arm64/LICENSE Sep 10 23:22:13.433854 tar[1508]: linux-arm64/helm Sep 10 23:22:13.434187 extend-filesystems[1489]: Resized partition /dev/vda9 Sep 10 23:22:13.437585 extend-filesystems[1532]: resize2fs 1.47.2 (1-Jan-2025) Sep 10 23:22:13.437494 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 10 23:22:13.445956 dbus-daemon[1485]: [system] SELinux support is enabled Sep 10 23:22:13.446438 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 10 23:22:13.449037 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 10 23:22:13.449073 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 10 23:22:13.451160 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 10 23:22:13.451264 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 10 23:22:13.451290 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 10 23:22:13.461934 systemd[1]: Started update-engine.service - Update Engine. Sep 10 23:22:13.464237 update_engine[1502]: I20250910 23:22:13.462433 1502 update_check_scheduler.cc:74] Next update check in 8m10s Sep 10 23:22:13.472112 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 10 23:22:13.485653 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 10 23:22:13.518930 extend-filesystems[1532]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 10 23:22:13.518930 extend-filesystems[1532]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 10 23:22:13.518930 extend-filesystems[1532]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 10 23:22:13.526652 extend-filesystems[1489]: Resized filesystem in /dev/vda9 Sep 10 23:22:13.530093 bash[1547]: Updated "/home/core/.ssh/authorized_keys" Sep 10 23:22:13.521775 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 10 23:22:13.523350 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 10 23:22:13.525509 locksmithd[1548]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 10 23:22:13.532176 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 10 23:22:13.539004 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 10 23:22:13.560579 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 10 23:22:13.591378 systemd-logind[1498]: Watching system buttons on /dev/input/event0 (Power Button) Sep 10 23:22:13.592408 systemd-logind[1498]: New seat seat0. Sep 10 23:22:13.595571 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 10 23:22:13.635153 containerd[1527]: time="2025-09-10T23:22:13Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 10 23:22:13.636532 containerd[1527]: time="2025-09-10T23:22:13.636492600Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 10 23:22:13.650541 containerd[1527]: time="2025-09-10T23:22:13.650486080Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.64µs" Sep 10 23:22:13.650541 containerd[1527]: time="2025-09-10T23:22:13.650529160Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 10 23:22:13.650541 containerd[1527]: time="2025-09-10T23:22:13.650549800Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 10 23:22:13.650729 containerd[1527]: time="2025-09-10T23:22:13.650700760Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 10 23:22:13.650755 containerd[1527]: time="2025-09-10T23:22:13.650732800Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 10 23:22:13.650796 containerd[1527]: time="2025-09-10T23:22:13.650761120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:22:13.651792 containerd[1527]: time="2025-09-10T23:22:13.650811520Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 10 23:22:13.651792 containerd[1527]: time="2025-09-10T23:22:13.650827000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 
23:22:13.651792 containerd[1527]: time="2025-09-10T23:22:13.651180160Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 10 23:22:13.651792 containerd[1527]: time="2025-09-10T23:22:13.651198760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:22:13.651792 containerd[1527]: time="2025-09-10T23:22:13.651210520Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 10 23:22:13.651792 containerd[1527]: time="2025-09-10T23:22:13.651218520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 10 23:22:13.651792 containerd[1527]: time="2025-09-10T23:22:13.651355920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 10 23:22:13.651792 containerd[1527]: time="2025-09-10T23:22:13.651618000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:22:13.651792 containerd[1527]: time="2025-09-10T23:22:13.651704160Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 10 23:22:13.651792 containerd[1527]: time="2025-09-10T23:22:13.651714280Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 10 23:22:13.651792 containerd[1527]: time="2025-09-10T23:22:13.651768400Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 10 23:22:13.652218 
containerd[1527]: time="2025-09-10T23:22:13.652181960Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 10 23:22:13.652338 containerd[1527]: time="2025-09-10T23:22:13.652310160Z" level=info msg="metadata content store policy set" policy=shared Sep 10 23:22:13.656888 containerd[1527]: time="2025-09-10T23:22:13.656850280Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 10 23:22:13.656978 containerd[1527]: time="2025-09-10T23:22:13.656915120Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 10 23:22:13.656978 containerd[1527]: time="2025-09-10T23:22:13.656930200Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 10 23:22:13.656978 containerd[1527]: time="2025-09-10T23:22:13.656949560Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 10 23:22:13.656978 containerd[1527]: time="2025-09-10T23:22:13.656963200Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 10 23:22:13.656978 containerd[1527]: time="2025-09-10T23:22:13.656975680Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 10 23:22:13.657061 containerd[1527]: time="2025-09-10T23:22:13.656987640Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 10 23:22:13.657061 containerd[1527]: time="2025-09-10T23:22:13.656999520Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 10 23:22:13.657061 containerd[1527]: time="2025-09-10T23:22:13.657009400Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 10 23:22:13.657061 containerd[1527]: 
time="2025-09-10T23:22:13.657018600Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 10 23:22:13.657061 containerd[1527]: time="2025-09-10T23:22:13.657027800Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 10 23:22:13.657061 containerd[1527]: time="2025-09-10T23:22:13.657039520Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 10 23:22:13.657220 containerd[1527]: time="2025-09-10T23:22:13.657168200Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 10 23:22:13.657220 containerd[1527]: time="2025-09-10T23:22:13.657194520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 10 23:22:13.657220 containerd[1527]: time="2025-09-10T23:22:13.657211760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 10 23:22:13.657278 containerd[1527]: time="2025-09-10T23:22:13.657223680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 10 23:22:13.657278 containerd[1527]: time="2025-09-10T23:22:13.657240760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 10 23:22:13.657278 containerd[1527]: time="2025-09-10T23:22:13.657251640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 10 23:22:13.657278 containerd[1527]: time="2025-09-10T23:22:13.657264840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 10 23:22:13.657278 containerd[1527]: time="2025-09-10T23:22:13.657277160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 10 23:22:13.657368 containerd[1527]: time="2025-09-10T23:22:13.657292040Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 10 23:22:13.657368 containerd[1527]: time="2025-09-10T23:22:13.657305000Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 10 23:22:13.657368 containerd[1527]: time="2025-09-10T23:22:13.657316080Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 10 23:22:13.657545 containerd[1527]: time="2025-09-10T23:22:13.657509240Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 10 23:22:13.657545 containerd[1527]: time="2025-09-10T23:22:13.657533480Z" level=info msg="Start snapshots syncer" Sep 10 23:22:13.657756 containerd[1527]: time="2025-09-10T23:22:13.657559040Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 10 23:22:13.657831 containerd[1527]: time="2025-09-10T23:22:13.657770040Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 10 23:22:13.657831 containerd[1527]: time="2025-09-10T23:22:13.657826840Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 10 23:22:13.657951 containerd[1527]: time="2025-09-10T23:22:13.657890200Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 10 23:22:13.658158 containerd[1527]: time="2025-09-10T23:22:13.657987920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 10 23:22:13.658158 containerd[1527]: time="2025-09-10T23:22:13.658018080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 10 23:22:13.658158 containerd[1527]: time="2025-09-10T23:22:13.658029400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 10 23:22:13.658158 containerd[1527]: time="2025-09-10T23:22:13.658040960Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 10 23:22:13.658158 containerd[1527]: time="2025-09-10T23:22:13.658053000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 10 23:22:13.658158 containerd[1527]: time="2025-09-10T23:22:13.658068960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 10 23:22:13.658158 containerd[1527]: time="2025-09-10T23:22:13.658080200Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 10 23:22:13.658158 containerd[1527]: time="2025-09-10T23:22:13.658103760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 10 23:22:13.658158 containerd[1527]: time="2025-09-10T23:22:13.658115000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 10 23:22:13.658158 containerd[1527]: time="2025-09-10T23:22:13.658125440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 10 23:22:13.658365 containerd[1527]: time="2025-09-10T23:22:13.658192360Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:22:13.658365 containerd[1527]: time="2025-09-10T23:22:13.658208360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 10 23:22:13.658365 containerd[1527]: time="2025-09-10T23:22:13.658217400Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:22:13.658365 containerd[1527]: time="2025-09-10T23:22:13.658226800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 10 23:22:13.658365 containerd[1527]: time="2025-09-10T23:22:13.658234200Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 10 23:22:13.658365 containerd[1527]: time="2025-09-10T23:22:13.658243080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 10 23:22:13.658365 containerd[1527]: time="2025-09-10T23:22:13.658253040Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 10 23:22:13.658365 containerd[1527]: time="2025-09-10T23:22:13.658332960Z" level=info msg="runtime interface created" Sep 10 23:22:13.658365 containerd[1527]: time="2025-09-10T23:22:13.658337880Z" level=info msg="created NRI interface" Sep 10 23:22:13.658365 containerd[1527]: time="2025-09-10T23:22:13.658346720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 10 23:22:13.658365 containerd[1527]: time="2025-09-10T23:22:13.658358600Z" level=info msg="Connect containerd service" Sep 10 23:22:13.658539 containerd[1527]: time="2025-09-10T23:22:13.658384520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 10 23:22:13.660456 
containerd[1527]: time="2025-09-10T23:22:13.660422360Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 23:22:13.732990 containerd[1527]: time="2025-09-10T23:22:13.732941880Z" level=info msg="Start subscribing containerd event" Sep 10 23:22:13.733116 containerd[1527]: time="2025-09-10T23:22:13.733009640Z" level=info msg="Start recovering state" Sep 10 23:22:13.733296 containerd[1527]: time="2025-09-10T23:22:13.733276360Z" level=info msg="Start event monitor" Sep 10 23:22:13.733342 containerd[1527]: time="2025-09-10T23:22:13.733307240Z" level=info msg="Start cni network conf syncer for default" Sep 10 23:22:13.733342 containerd[1527]: time="2025-09-10T23:22:13.733317560Z" level=info msg="Start streaming server" Sep 10 23:22:13.733342 containerd[1527]: time="2025-09-10T23:22:13.733326680Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 10 23:22:13.733342 containerd[1527]: time="2025-09-10T23:22:13.733334280Z" level=info msg="runtime interface starting up..." Sep 10 23:22:13.733342 containerd[1527]: time="2025-09-10T23:22:13.733339600Z" level=info msg="starting plugins..." Sep 10 23:22:13.733743 containerd[1527]: time="2025-09-10T23:22:13.733705760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 10 23:22:13.733783 containerd[1527]: time="2025-09-10T23:22:13.733769040Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 10 23:22:13.736194 containerd[1527]: time="2025-09-10T23:22:13.736165880Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 10 23:22:13.737934 containerd[1527]: time="2025-09-10T23:22:13.736319640Z" level=info msg="containerd successfully booted in 0.101935s" Sep 10 23:22:13.736415 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 10 23:22:13.789241 tar[1508]: linux-arm64/README.md Sep 10 23:22:13.810199 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 10 23:22:14.207583 sshd_keygen[1521]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 10 23:22:14.227472 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 10 23:22:14.230783 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 10 23:22:14.253109 systemd[1]: issuegen.service: Deactivated successfully. Sep 10 23:22:14.253401 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 10 23:22:14.256923 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 10 23:22:14.286486 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 10 23:22:14.290770 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 10 23:22:14.294472 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 10 23:22:14.295684 systemd[1]: Reached target getty.target - Login Prompts. Sep 10 23:22:15.186312 systemd-networkd[1425]: eth0: Gained IPv6LL Sep 10 23:22:15.188741 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 10 23:22:15.190289 systemd[1]: Reached target network-online.target - Network is Online. Sep 10 23:22:15.194402 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 10 23:22:15.197046 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:22:15.202544 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 10 23:22:15.216301 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 10 23:22:15.216561 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 10 23:22:15.217894 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 10 23:22:15.222247 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 10 23:22:15.767830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:22:15.769364 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 10 23:22:15.772396 systemd[1]: Startup finished in 2.021s (kernel) + 5.147s (initrd) + 4.074s (userspace) = 11.244s. Sep 10 23:22:15.775900 (kubelet)[1626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:22:16.129188 kubelet[1626]: E0910 23:22:16.129043 1626 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:22:16.131339 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:22:16.131483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:22:16.133257 systemd[1]: kubelet.service: Consumed 746ms CPU time, 258.4M memory peak. Sep 10 23:22:19.979674 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 10 23:22:19.980681 systemd[1]: Started sshd@0-10.0.0.24:22-10.0.0.1:51176.service - OpenSSH per-connection server daemon (10.0.0.1:51176). Sep 10 23:22:20.035245 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 51176 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:22:20.037101 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:22:20.042996 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 10 23:22:20.043871 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Sep 10 23:22:20.049053 systemd-logind[1498]: New session 1 of user core. Sep 10 23:22:20.065410 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 10 23:22:20.067642 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 10 23:22:20.083036 (systemd)[1644]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 10 23:22:20.085246 systemd-logind[1498]: New session c1 of user core. Sep 10 23:22:20.187350 systemd[1644]: Queued start job for default target default.target. Sep 10 23:22:20.206246 systemd[1644]: Created slice app.slice - User Application Slice. Sep 10 23:22:20.206275 systemd[1644]: Reached target paths.target - Paths. Sep 10 23:22:20.206316 systemd[1644]: Reached target timers.target - Timers. Sep 10 23:22:20.207567 systemd[1644]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 10 23:22:20.217166 systemd[1644]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 10 23:22:20.217227 systemd[1644]: Reached target sockets.target - Sockets. Sep 10 23:22:20.217267 systemd[1644]: Reached target basic.target - Basic System. Sep 10 23:22:20.217300 systemd[1644]: Reached target default.target - Main User Target. Sep 10 23:22:20.217325 systemd[1644]: Startup finished in 127ms. Sep 10 23:22:20.217436 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 10 23:22:20.218860 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 10 23:22:20.289535 systemd[1]: Started sshd@1-10.0.0.24:22-10.0.0.1:51186.service - OpenSSH per-connection server daemon (10.0.0.1:51186). Sep 10 23:22:20.337388 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 51186 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:22:20.338655 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:22:20.343235 systemd-logind[1498]: New session 2 of user core. 
Sep 10 23:22:20.359344 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 10 23:22:20.411481 sshd[1658]: Connection closed by 10.0.0.1 port 51186 Sep 10 23:22:20.411952 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Sep 10 23:22:20.423039 systemd[1]: sshd@1-10.0.0.24:22-10.0.0.1:51186.service: Deactivated successfully. Sep 10 23:22:20.424456 systemd[1]: session-2.scope: Deactivated successfully. Sep 10 23:22:20.426236 systemd-logind[1498]: Session 2 logged out. Waiting for processes to exit. Sep 10 23:22:20.427851 systemd[1]: Started sshd@2-10.0.0.24:22-10.0.0.1:51202.service - OpenSSH per-connection server daemon (10.0.0.1:51202). Sep 10 23:22:20.428682 systemd-logind[1498]: Removed session 2. Sep 10 23:22:20.485035 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 51202 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:22:20.486234 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:22:20.490039 systemd-logind[1498]: New session 3 of user core. Sep 10 23:22:20.500310 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 10 23:22:20.548880 sshd[1667]: Connection closed by 10.0.0.1 port 51202 Sep 10 23:22:20.549224 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Sep 10 23:22:20.565185 systemd[1]: sshd@2-10.0.0.24:22-10.0.0.1:51202.service: Deactivated successfully. Sep 10 23:22:20.566639 systemd[1]: session-3.scope: Deactivated successfully. Sep 10 23:22:20.567256 systemd-logind[1498]: Session 3 logged out. Waiting for processes to exit. Sep 10 23:22:20.569405 systemd[1]: Started sshd@3-10.0.0.24:22-10.0.0.1:51216.service - OpenSSH per-connection server daemon (10.0.0.1:51216). Sep 10 23:22:20.569847 systemd-logind[1498]: Removed session 3. 
Sep 10 23:22:20.623182 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 51216 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:22:20.624358 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:22:20.628041 systemd-logind[1498]: New session 4 of user core. Sep 10 23:22:20.637306 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 10 23:22:20.689486 sshd[1676]: Connection closed by 10.0.0.1 port 51216 Sep 10 23:22:20.690067 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Sep 10 23:22:20.699029 systemd[1]: sshd@3-10.0.0.24:22-10.0.0.1:51216.service: Deactivated successfully. Sep 10 23:22:20.700443 systemd[1]: session-4.scope: Deactivated successfully. Sep 10 23:22:20.701072 systemd-logind[1498]: Session 4 logged out. Waiting for processes to exit. Sep 10 23:22:20.703116 systemd[1]: Started sshd@4-10.0.0.24:22-10.0.0.1:51220.service - OpenSSH per-connection server daemon (10.0.0.1:51220). Sep 10 23:22:20.703732 systemd-logind[1498]: Removed session 4. Sep 10 23:22:20.749939 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 51220 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:22:20.751138 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:22:20.755059 systemd-logind[1498]: New session 5 of user core. Sep 10 23:22:20.767330 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 10 23:22:20.824423 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 10 23:22:20.824710 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:22:20.850087 sudo[1686]: pam_unix(sudo:session): session closed for user root Sep 10 23:22:20.851814 sshd[1685]: Connection closed by 10.0.0.1 port 51220 Sep 10 23:22:20.852012 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Sep 10 23:22:20.864275 systemd[1]: sshd@4-10.0.0.24:22-10.0.0.1:51220.service: Deactivated successfully. Sep 10 23:22:20.865764 systemd[1]: session-5.scope: Deactivated successfully. Sep 10 23:22:20.866577 systemd-logind[1498]: Session 5 logged out. Waiting for processes to exit. Sep 10 23:22:20.868879 systemd[1]: Started sshd@5-10.0.0.24:22-10.0.0.1:51226.service - OpenSSH per-connection server daemon (10.0.0.1:51226). Sep 10 23:22:20.869327 systemd-logind[1498]: Removed session 5. Sep 10 23:22:20.928910 sshd[1692]: Accepted publickey for core from 10.0.0.1 port 51226 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:22:20.930294 sshd-session[1692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:22:20.935217 systemd-logind[1498]: New session 6 of user core. Sep 10 23:22:20.948630 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 10 23:22:21.001556 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 10 23:22:21.001814 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:22:21.159179 sudo[1697]: pam_unix(sudo:session): session closed for user root Sep 10 23:22:21.164244 sudo[1696]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 10 23:22:21.164501 sudo[1696]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:22:21.173402 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 10 23:22:21.209529 augenrules[1719]: No rules Sep 10 23:22:21.210794 systemd[1]: audit-rules.service: Deactivated successfully. Sep 10 23:22:21.211019 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 10 23:22:21.211992 sudo[1696]: pam_unix(sudo:session): session closed for user root Sep 10 23:22:21.213485 sshd[1695]: Connection closed by 10.0.0.1 port 51226 Sep 10 23:22:21.214112 sshd-session[1692]: pam_unix(sshd:session): session closed for user core Sep 10 23:22:21.233440 systemd[1]: sshd@5-10.0.0.24:22-10.0.0.1:51226.service: Deactivated successfully. Sep 10 23:22:21.236494 systemd[1]: session-6.scope: Deactivated successfully. Sep 10 23:22:21.237247 systemd-logind[1498]: Session 6 logged out. Waiting for processes to exit. Sep 10 23:22:21.239331 systemd[1]: Started sshd@6-10.0.0.24:22-10.0.0.1:51232.service - OpenSSH per-connection server daemon (10.0.0.1:51232). Sep 10 23:22:21.239982 systemd-logind[1498]: Removed session 6. Sep 10 23:22:21.296275 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 51232 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:22:21.297486 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:22:21.301952 systemd-logind[1498]: New session 7 of user core. 
Sep 10 23:22:21.315349 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 10 23:22:21.370413 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 10 23:22:21.370672 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 10 23:22:21.643980 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 10 23:22:21.665485 (dockerd)[1753]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 10 23:22:21.872857 dockerd[1753]: time="2025-09-10T23:22:21.872502380Z" level=info msg="Starting up" Sep 10 23:22:21.873341 dockerd[1753]: time="2025-09-10T23:22:21.873320629Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 10 23:22:21.883334 dockerd[1753]: time="2025-09-10T23:22:21.883301108Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 10 23:22:21.916227 dockerd[1753]: time="2025-09-10T23:22:21.916112714Z" level=info msg="Loading containers: start." Sep 10 23:22:21.924184 kernel: Initializing XFRM netlink socket Sep 10 23:22:22.111836 systemd-networkd[1425]: docker0: Link UP Sep 10 23:22:22.115854 dockerd[1753]: time="2025-09-10T23:22:22.115804031Z" level=info msg="Loading containers: done." 
Sep 10 23:22:22.129968 dockerd[1753]: time="2025-09-10T23:22:22.129914596Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 23:22:22.130103 dockerd[1753]: time="2025-09-10T23:22:22.130001311Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 10 23:22:22.130103 dockerd[1753]: time="2025-09-10T23:22:22.130084611Z" level=info msg="Initializing buildkit" Sep 10 23:22:22.152330 dockerd[1753]: time="2025-09-10T23:22:22.152292117Z" level=info msg="Completed buildkit initialization" Sep 10 23:22:22.157133 dockerd[1753]: time="2025-09-10T23:22:22.157093398Z" level=info msg="Daemon has completed initialization" Sep 10 23:22:22.157308 dockerd[1753]: time="2025-09-10T23:22:22.157164299Z" level=info msg="API listen on /run/docker.sock" Sep 10 23:22:22.157356 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 23:22:22.678173 containerd[1527]: time="2025-09-10T23:22:22.678112151Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Sep 10 23:22:23.440823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2252019583.mount: Deactivated successfully. 
Sep 10 23:22:24.651729 containerd[1527]: time="2025-09-10T23:22:24.651673589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:24.652141 containerd[1527]: time="2025-09-10T23:22:24.652110552Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230" Sep 10 23:22:24.652950 containerd[1527]: time="2025-09-10T23:22:24.652925062Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:24.655444 containerd[1527]: time="2025-09-10T23:22:24.655386899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:24.656724 containerd[1527]: time="2025-09-10T23:22:24.656456119Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.978287084s" Sep 10 23:22:24.656724 containerd[1527]: time="2025-09-10T23:22:24.656489370Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Sep 10 23:22:24.657710 containerd[1527]: time="2025-09-10T23:22:24.657679241Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Sep 10 23:22:26.031854 containerd[1527]: time="2025-09-10T23:22:26.031790350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:26.032854 containerd[1527]: time="2025-09-10T23:22:26.032814997Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919" Sep 10 23:22:26.033210 containerd[1527]: time="2025-09-10T23:22:26.033188907Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:26.036353 containerd[1527]: time="2025-09-10T23:22:26.036314540Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:26.037279 containerd[1527]: time="2025-09-10T23:22:26.037233141Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.379519091s" Sep 10 23:22:26.037321 containerd[1527]: time="2025-09-10T23:22:26.037285680Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Sep 10 23:22:26.037783 containerd[1527]: time="2025-09-10T23:22:26.037761517Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Sep 10 23:22:26.381939 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 23:22:26.383694 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:22:26.544492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 10 23:22:26.548111 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:22:26.582501 kubelet[2042]: E0910 23:22:26.582433 2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:22:26.585708 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:22:26.585851 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:22:26.586125 systemd[1]: kubelet.service: Consumed 142ms CPU time, 106.1M memory peak. Sep 10 23:22:27.354370 containerd[1527]: time="2025-09-10T23:22:27.354323584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:27.355593 containerd[1527]: time="2025-09-10T23:22:27.355567489Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979" Sep 10 23:22:27.356495 containerd[1527]: time="2025-09-10T23:22:27.356474875Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:27.358694 containerd[1527]: time="2025-09-10T23:22:27.358660573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:27.360383 containerd[1527]: time="2025-09-10T23:22:27.360347967Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id 
\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.322556859s" Sep 10 23:22:27.360424 containerd[1527]: time="2025-09-10T23:22:27.360386689Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Sep 10 23:22:27.360932 containerd[1527]: time="2025-09-10T23:22:27.360911776Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Sep 10 23:22:28.351537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2215887672.mount: Deactivated successfully. Sep 10 23:22:28.578696 containerd[1527]: time="2025-09-10T23:22:28.578648777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:28.579799 containerd[1527]: time="2025-09-10T23:22:28.579740853Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108" Sep 10 23:22:28.580614 containerd[1527]: time="2025-09-10T23:22:28.580453162Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:28.582227 containerd[1527]: time="2025-09-10T23:22:28.582189772Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:28.583253 containerd[1527]: time="2025-09-10T23:22:28.583214920Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag 
\"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.222273104s" Sep 10 23:22:28.583307 containerd[1527]: time="2025-09-10T23:22:28.583255496Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Sep 10 23:22:28.583791 containerd[1527]: time="2025-09-10T23:22:28.583693607Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 10 23:22:29.127239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3400003948.mount: Deactivated successfully. Sep 10 23:22:29.961956 containerd[1527]: time="2025-09-10T23:22:29.961908709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:29.962763 containerd[1527]: time="2025-09-10T23:22:29.962730069Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 10 23:22:29.964160 containerd[1527]: time="2025-09-10T23:22:29.963762375Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:29.966947 containerd[1527]: time="2025-09-10T23:22:29.966909695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:29.968031 containerd[1527]: time="2025-09-10T23:22:29.967987934Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.384252187s" Sep 10 23:22:29.968099 containerd[1527]: time="2025-09-10T23:22:29.968032417Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 10 23:22:29.968500 containerd[1527]: time="2025-09-10T23:22:29.968478061Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 23:22:30.393666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount105997369.mount: Deactivated successfully. Sep 10 23:22:30.398186 containerd[1527]: time="2025-09-10T23:22:30.398123353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 23:22:30.398702 containerd[1527]: time="2025-09-10T23:22:30.398659509Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 10 23:22:30.399563 containerd[1527]: time="2025-09-10T23:22:30.399536408Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 23:22:30.401994 containerd[1527]: time="2025-09-10T23:22:30.401959948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 23:22:30.402587 containerd[1527]: time="2025-09-10T23:22:30.402560020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag 
\"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 434.053827ms" Sep 10 23:22:30.402635 containerd[1527]: time="2025-09-10T23:22:30.402593326Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 10 23:22:30.403033 containerd[1527]: time="2025-09-10T23:22:30.403008687Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 10 23:22:30.914836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount876431742.mount: Deactivated successfully. Sep 10 23:22:32.852016 containerd[1527]: time="2025-09-10T23:22:32.851962268Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:32.852433 containerd[1527]: time="2025-09-10T23:22:32.852406772Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859" Sep 10 23:22:32.853476 containerd[1527]: time="2025-09-10T23:22:32.853428066Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:32.856167 containerd[1527]: time="2025-09-10T23:22:32.855784760Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:32.857719 containerd[1527]: time="2025-09-10T23:22:32.857676944Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size 
\"70026017\" in 2.454638017s" Sep 10 23:22:32.857719 containerd[1527]: time="2025-09-10T23:22:32.857714746Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 10 23:22:36.836292 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 23:22:36.837728 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:22:37.002903 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:22:37.007044 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 23:22:37.045012 kubelet[2206]: E0910 23:22:37.044964 2206 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 23:22:37.047838 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 23:22:37.047980 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 23:22:37.048326 systemd[1]: kubelet.service: Consumed 134ms CPU time, 107.9M memory peak. Sep 10 23:22:38.041826 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:22:38.041973 systemd[1]: kubelet.service: Consumed 134ms CPU time, 107.9M memory peak. Sep 10 23:22:38.043978 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:22:38.065458 systemd[1]: Reload requested from client PID 2221 ('systemctl') (unit session-7.scope)... Sep 10 23:22:38.065472 systemd[1]: Reloading... Sep 10 23:22:38.127257 zram_generator::config[2261]: No configuration found. 
Sep 10 23:22:38.359826 systemd[1]: Reloading finished in 294 ms. Sep 10 23:22:38.430763 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 10 23:22:38.430858 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 10 23:22:38.431174 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:22:38.431231 systemd[1]: kubelet.service: Consumed 89ms CPU time, 95.2M memory peak. Sep 10 23:22:38.433074 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:22:38.541515 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:22:38.545269 (kubelet)[2308]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 23:22:38.577291 kubelet[2308]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:22:38.577291 kubelet[2308]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 23:22:38.577291 kubelet[2308]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 23:22:38.577632 kubelet[2308]: I0910 23:22:38.577305 2308 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 23:22:39.527719 kubelet[2308]: I0910 23:22:39.527670 2308 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 10 23:22:39.527719 kubelet[2308]: I0910 23:22:39.527705 2308 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 23:22:39.527967 kubelet[2308]: I0910 23:22:39.527943 2308 server.go:956] "Client rotation is on, will bootstrap in background" Sep 10 23:22:39.553175 kubelet[2308]: E0910 23:22:39.552863 2308 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.24:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 10 23:22:39.556539 kubelet[2308]: I0910 23:22:39.556515 2308 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:22:39.565909 kubelet[2308]: I0910 23:22:39.565886 2308 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 23:22:39.568711 kubelet[2308]: I0910 23:22:39.568691 2308 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 23:22:39.569758 kubelet[2308]: I0910 23:22:39.569709 2308 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 23:22:39.569973 kubelet[2308]: I0910 23:22:39.569755 2308 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 23:22:39.570055 kubelet[2308]: I0910 23:22:39.570035 2308 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 23:22:39.570055 
kubelet[2308]: I0910 23:22:39.570044 2308 container_manager_linux.go:303] "Creating device plugin manager" Sep 10 23:22:39.570273 kubelet[2308]: I0910 23:22:39.570260 2308 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:22:39.572634 kubelet[2308]: I0910 23:22:39.572611 2308 kubelet.go:480] "Attempting to sync node with API server" Sep 10 23:22:39.572680 kubelet[2308]: I0910 23:22:39.572639 2308 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 23:22:39.572680 kubelet[2308]: I0910 23:22:39.572664 2308 kubelet.go:386] "Adding apiserver pod source" Sep 10 23:22:39.573663 kubelet[2308]: I0910 23:22:39.573643 2308 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 23:22:39.574797 kubelet[2308]: E0910 23:22:39.574747 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.24:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 10 23:22:39.575449 kubelet[2308]: I0910 23:22:39.575360 2308 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 10 23:22:39.575449 kubelet[2308]: E0910 23:22:39.575407 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.24:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 10 23:22:39.576106 kubelet[2308]: I0910 23:22:39.576069 2308 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 10 23:22:39.576240 kubelet[2308]: W0910 
23:22:39.576229 2308 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 10 23:22:39.579676 kubelet[2308]: I0910 23:22:39.579284 2308 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 23:22:39.579676 kubelet[2308]: I0910 23:22:39.579335 2308 server.go:1289] "Started kubelet" Sep 10 23:22:39.581861 kubelet[2308]: I0910 23:22:39.581839 2308 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:22:39.582665 kubelet[2308]: I0910 23:22:39.582648 2308 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:22:39.583740 kubelet[2308]: I0910 23:22:39.581853 2308 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:22:39.587035 kubelet[2308]: I0910 23:22:39.581922 2308 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 23:22:39.587194 kubelet[2308]: E0910 23:22:39.587162 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.24:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 10 23:22:39.587194 kubelet[2308]: E0910 23:22:39.584196 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:22:39.587194 kubelet[2308]: I0910 23:22:39.585380 2308 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 23:22:39.587369 kubelet[2308]: I0910 23:22:39.587348 2308 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:22:39.587576 kubelet[2308]: I0910 23:22:39.587559 2308 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 
23:22:39.587930 kubelet[2308]: I0910 23:22:39.587900 2308 server.go:317] "Adding debug handlers to kubelet server" Sep 10 23:22:39.588833 kubelet[2308]: E0910 23:22:39.588799 2308 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 23:22:39.588934 kubelet[2308]: I0910 23:22:39.587899 2308 factory.go:223] Registration of the systemd container factory successfully Sep 10 23:22:39.589011 kubelet[2308]: I0910 23:22:39.588989 2308 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:22:39.589134 kubelet[2308]: I0910 23:22:39.584008 2308 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 23:22:39.589773 kubelet[2308]: E0910 23:22:39.589712 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="200ms" Sep 10 23:22:39.589912 kubelet[2308]: I0910 23:22:39.589852 2308 factory.go:223] Registration of the containerd container factory successfully Sep 10 23:22:39.590292 kubelet[2308]: E0910 23:22:39.588948 2308 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.24:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.24:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18640f48b7ef2ee1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 23:22:39.579303649 +0000 UTC 
m=+1.030694885,LastTimestamp:2025-09-10 23:22:39.579303649 +0000 UTC m=+1.030694885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 23:22:39.599161 kubelet[2308]: I0910 23:22:39.598674 2308 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 23:22:39.599161 kubelet[2308]: I0910 23:22:39.598690 2308 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 23:22:39.599161 kubelet[2308]: I0910 23:22:39.598706 2308 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:22:39.601594 kubelet[2308]: I0910 23:22:39.601553 2308 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 10 23:22:39.602399 kubelet[2308]: I0910 23:22:39.602381 2308 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 10 23:22:39.602436 kubelet[2308]: I0910 23:22:39.602422 2308 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 23:22:39.602458 kubelet[2308]: I0910 23:22:39.602441 2308 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 10 23:22:39.602458 kubelet[2308]: I0910 23:22:39.602450 2308 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 23:22:39.602502 kubelet[2308]: E0910 23:22:39.602488 2308 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:22:39.603275 kubelet[2308]: E0910 23:22:39.603246 2308 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.24:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.24:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 10 23:22:39.687906 kubelet[2308]: E0910 23:22:39.687870 2308 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:22:39.703216 kubelet[2308]: E0910 23:22:39.703186 2308 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 10 23:22:39.708515 kubelet[2308]: I0910 23:22:39.708496 2308 policy_none.go:49] "None policy: Start" Sep 10 23:22:39.708677 kubelet[2308]: I0910 23:22:39.708605 2308 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 23:22:39.708677 kubelet[2308]: I0910 23:22:39.708626 2308 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:22:39.713705 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 10 23:22:39.729638 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 23:22:39.732454 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 10 23:22:39.750936 kubelet[2308]: E0910 23:22:39.750900 2308 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 23:22:39.751113 kubelet[2308]: I0910 23:22:39.751090 2308 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:22:39.751155 kubelet[2308]: I0910 23:22:39.751108 2308 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:22:39.751475 kubelet[2308]: I0910 23:22:39.751330 2308 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:22:39.752274 kubelet[2308]: E0910 23:22:39.752249 2308 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 10 23:22:39.752330 kubelet[2308]: E0910 23:22:39.752287 2308 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 23:22:39.790200 kubelet[2308]: E0910 23:22:39.790077 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="400ms" Sep 10 23:22:39.853180 kubelet[2308]: I0910 23:22:39.853129 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:22:39.853543 kubelet[2308]: E0910 23:22:39.853506 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Sep 10 23:22:39.881106 kubelet[2308]: E0910 23:22:39.880993 2308 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.24:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.24:6443: connect: connection 
refused" event="&Event{ObjectMeta:{localhost.18640f48b7ef2ee1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 23:22:39.579303649 +0000 UTC m=+1.030694885,LastTimestamp:2025-09-10 23:22:39.579303649 +0000 UTC m=+1.030694885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 23:22:39.912531 systemd[1]: Created slice kubepods-burstable-pod23fb9015749e6fd52d468f784a73f207.slice - libcontainer container kubepods-burstable-pod23fb9015749e6fd52d468f784a73f207.slice. Sep 10 23:22:39.927024 kubelet[2308]: E0910 23:22:39.926990 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:22:39.929594 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. Sep 10 23:22:39.931038 kubelet[2308]: E0910 23:22:39.931005 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:22:39.933484 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. 
Sep 10 23:22:39.935619 kubelet[2308]: E0910 23:22:39.935602 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:22:40.056801 kubelet[2308]: I0910 23:22:40.055490 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:22:40.056801 kubelet[2308]: E0910 23:22:40.055938 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Sep 10 23:22:40.090290 kubelet[2308]: I0910 23:22:40.090239 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:40.090290 kubelet[2308]: I0910 23:22:40.090280 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:40.090426 kubelet[2308]: I0910 23:22:40.090307 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 10 23:22:40.090426 kubelet[2308]: I0910 23:22:40.090322 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:40.090426 kubelet[2308]: I0910 23:22:40.090337 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23fb9015749e6fd52d468f784a73f207-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"23fb9015749e6fd52d468f784a73f207\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:40.090426 kubelet[2308]: I0910 23:22:40.090359 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23fb9015749e6fd52d468f784a73f207-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"23fb9015749e6fd52d468f784a73f207\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:40.090426 kubelet[2308]: I0910 23:22:40.090375 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23fb9015749e6fd52d468f784a73f207-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"23fb9015749e6fd52d468f784a73f207\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:40.090544 kubelet[2308]: I0910 23:22:40.090390 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:40.090544 kubelet[2308]: I0910 23:22:40.090404 2308 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:40.191325 kubelet[2308]: E0910 23:22:40.191281 2308 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.24:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.24:6443: connect: connection refused" interval="800ms" Sep 10 23:22:40.228142 containerd[1527]: time="2025-09-10T23:22:40.228090552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:23fb9015749e6fd52d468f784a73f207,Namespace:kube-system,Attempt:0,}" Sep 10 23:22:40.232609 containerd[1527]: time="2025-09-10T23:22:40.232567879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 10 23:22:40.236320 containerd[1527]: time="2025-09-10T23:22:40.236155013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 10 23:22:40.254053 containerd[1527]: time="2025-09-10T23:22:40.254003555Z" level=info msg="connecting to shim 6c4fa3afee66232a2b2e92ac09bcfc73473a87e78dea9ab78dd5ab42ef8bb402" address="unix:///run/containerd/s/aaffe985cacfb0efd751e72d5da772c4430833911915c465a28dde85c5dc0bb5" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:22:40.266216 containerd[1527]: time="2025-09-10T23:22:40.265756300Z" level=info msg="connecting to shim 350e60733a1ff7e8626bd2a929d170a00ad8ed8d9df12a90dfe9613b96fd9dda" address="unix:///run/containerd/s/829945cc892f7735c0810e7ba8bb2de0091a577fee8d05b6d7c85ac74b3089b1" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:22:40.269583 containerd[1527]: time="2025-09-10T23:22:40.269544411Z" level=info 
msg="connecting to shim 7353325c691b7bb63a2a8765a3ec4842528169ec475c78699886b0c0ea80ba84" address="unix:///run/containerd/s/68c5a9563c5d7b1a2509b30602c09410d9b3a0e4d0134e6cba90797aea10ec28" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:22:40.281315 systemd[1]: Started cri-containerd-6c4fa3afee66232a2b2e92ac09bcfc73473a87e78dea9ab78dd5ab42ef8bb402.scope - libcontainer container 6c4fa3afee66232a2b2e92ac09bcfc73473a87e78dea9ab78dd5ab42ef8bb402. Sep 10 23:22:40.296325 systemd[1]: Started cri-containerd-350e60733a1ff7e8626bd2a929d170a00ad8ed8d9df12a90dfe9613b96fd9dda.scope - libcontainer container 350e60733a1ff7e8626bd2a929d170a00ad8ed8d9df12a90dfe9613b96fd9dda. Sep 10 23:22:40.300531 systemd[1]: Started cri-containerd-7353325c691b7bb63a2a8765a3ec4842528169ec475c78699886b0c0ea80ba84.scope - libcontainer container 7353325c691b7bb63a2a8765a3ec4842528169ec475c78699886b0c0ea80ba84. Sep 10 23:22:40.330517 containerd[1527]: time="2025-09-10T23:22:40.330418331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:23fb9015749e6fd52d468f784a73f207,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c4fa3afee66232a2b2e92ac09bcfc73473a87e78dea9ab78dd5ab42ef8bb402\"" Sep 10 23:22:40.336439 containerd[1527]: time="2025-09-10T23:22:40.336407810Z" level=info msg="CreateContainer within sandbox \"6c4fa3afee66232a2b2e92ac09bcfc73473a87e78dea9ab78dd5ab42ef8bb402\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 23:22:40.341017 containerd[1527]: time="2025-09-10T23:22:40.340924076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"350e60733a1ff7e8626bd2a929d170a00ad8ed8d9df12a90dfe9613b96fd9dda\"" Sep 10 23:22:40.343458 containerd[1527]: time="2025-09-10T23:22:40.343423004Z" level=info msg="Container 1dcfed77c90cdf95962fc1ac75b6052b03e8ab716176180800194aac33de3145: CDI devices from CRI 
Config.CDIDevices: []" Sep 10 23:22:40.344616 containerd[1527]: time="2025-09-10T23:22:40.344588924Z" level=info msg="CreateContainer within sandbox \"350e60733a1ff7e8626bd2a929d170a00ad8ed8d9df12a90dfe9613b96fd9dda\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 23:22:40.351614 containerd[1527]: time="2025-09-10T23:22:40.351580323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"7353325c691b7bb63a2a8765a3ec4842528169ec475c78699886b0c0ea80ba84\"" Sep 10 23:22:40.354779 containerd[1527]: time="2025-09-10T23:22:40.354733456Z" level=info msg="CreateContainer within sandbox \"7353325c691b7bb63a2a8765a3ec4842528169ec475c78699886b0c0ea80ba84\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 23:22:40.357161 containerd[1527]: time="2025-09-10T23:22:40.356476829Z" level=info msg="Container 8ef7a4f293774c1c7c299ce5a27e85b8e0aebcad06e0cf26418f763f03db6cda: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:22:40.357161 containerd[1527]: time="2025-09-10T23:22:40.356730483Z" level=info msg="CreateContainer within sandbox \"6c4fa3afee66232a2b2e92ac09bcfc73473a87e78dea9ab78dd5ab42ef8bb402\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1dcfed77c90cdf95962fc1ac75b6052b03e8ab716176180800194aac33de3145\"" Sep 10 23:22:40.357435 containerd[1527]: time="2025-09-10T23:22:40.357351039Z" level=info msg="StartContainer for \"1dcfed77c90cdf95962fc1ac75b6052b03e8ab716176180800194aac33de3145\"" Sep 10 23:22:40.358642 containerd[1527]: time="2025-09-10T23:22:40.358597158Z" level=info msg="connecting to shim 1dcfed77c90cdf95962fc1ac75b6052b03e8ab716176180800194aac33de3145" address="unix:///run/containerd/s/aaffe985cacfb0efd751e72d5da772c4430833911915c465a28dde85c5dc0bb5" protocol=ttrpc version=3 Sep 10 23:22:40.361557 containerd[1527]: 
time="2025-09-10T23:22:40.361521114Z" level=info msg="Container 080a05ffbed803336adf27ea908fb6cd844cb1ad0d3e4ac0f8d4d8beb5db500d: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:22:40.366050 containerd[1527]: time="2025-09-10T23:22:40.366006333Z" level=info msg="CreateContainer within sandbox \"350e60733a1ff7e8626bd2a929d170a00ad8ed8d9df12a90dfe9613b96fd9dda\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8ef7a4f293774c1c7c299ce5a27e85b8e0aebcad06e0cf26418f763f03db6cda\"" Sep 10 23:22:40.366603 containerd[1527]: time="2025-09-10T23:22:40.366580941Z" level=info msg="StartContainer for \"8ef7a4f293774c1c7c299ce5a27e85b8e0aebcad06e0cf26418f763f03db6cda\"" Sep 10 23:22:40.368172 containerd[1527]: time="2025-09-10T23:22:40.368126903Z" level=info msg="connecting to shim 8ef7a4f293774c1c7c299ce5a27e85b8e0aebcad06e0cf26418f763f03db6cda" address="unix:///run/containerd/s/829945cc892f7735c0810e7ba8bb2de0091a577fee8d05b6d7c85ac74b3089b1" protocol=ttrpc version=3 Sep 10 23:22:40.368332 containerd[1527]: time="2025-09-10T23:22:40.368301601Z" level=info msg="CreateContainer within sandbox \"7353325c691b7bb63a2a8765a3ec4842528169ec475c78699886b0c0ea80ba84\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"080a05ffbed803336adf27ea908fb6cd844cb1ad0d3e4ac0f8d4d8beb5db500d\"" Sep 10 23:22:40.368791 containerd[1527]: time="2025-09-10T23:22:40.368753467Z" level=info msg="StartContainer for \"080a05ffbed803336adf27ea908fb6cd844cb1ad0d3e4ac0f8d4d8beb5db500d\"" Sep 10 23:22:40.369735 containerd[1527]: time="2025-09-10T23:22:40.369707115Z" level=info msg="connecting to shim 080a05ffbed803336adf27ea908fb6cd844cb1ad0d3e4ac0f8d4d8beb5db500d" address="unix:///run/containerd/s/68c5a9563c5d7b1a2509b30602c09410d9b3a0e4d0134e6cba90797aea10ec28" protocol=ttrpc version=3 Sep 10 23:22:40.383302 systemd[1]: Started cri-containerd-1dcfed77c90cdf95962fc1ac75b6052b03e8ab716176180800194aac33de3145.scope - libcontainer container 
1dcfed77c90cdf95962fc1ac75b6052b03e8ab716176180800194aac33de3145. Sep 10 23:22:40.388046 systemd[1]: Started cri-containerd-080a05ffbed803336adf27ea908fb6cd844cb1ad0d3e4ac0f8d4d8beb5db500d.scope - libcontainer container 080a05ffbed803336adf27ea908fb6cd844cb1ad0d3e4ac0f8d4d8beb5db500d. Sep 10 23:22:40.389096 systemd[1]: Started cri-containerd-8ef7a4f293774c1c7c299ce5a27e85b8e0aebcad06e0cf26418f763f03db6cda.scope - libcontainer container 8ef7a4f293774c1c7c299ce5a27e85b8e0aebcad06e0cf26418f763f03db6cda. Sep 10 23:22:40.433243 containerd[1527]: time="2025-09-10T23:22:40.433204827Z" level=info msg="StartContainer for \"1dcfed77c90cdf95962fc1ac75b6052b03e8ab716176180800194aac33de3145\" returns successfully" Sep 10 23:22:40.437636 containerd[1527]: time="2025-09-10T23:22:40.437609768Z" level=info msg="StartContainer for \"8ef7a4f293774c1c7c299ce5a27e85b8e0aebcad06e0cf26418f763f03db6cda\" returns successfully" Sep 10 23:22:40.442344 containerd[1527]: time="2025-09-10T23:22:40.442317676Z" level=info msg="StartContainer for \"080a05ffbed803336adf27ea908fb6cd844cb1ad0d3e4ac0f8d4d8beb5db500d\" returns successfully" Sep 10 23:22:40.458929 kubelet[2308]: I0910 23:22:40.458418 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:22:40.458929 kubelet[2308]: E0910 23:22:40.458781 2308 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.24:6443/api/v1/nodes\": dial tcp 10.0.0.24:6443: connect: connection refused" node="localhost" Sep 10 23:22:40.610636 kubelet[2308]: E0910 23:22:40.609898 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:22:40.611822 kubelet[2308]: E0910 23:22:40.611669 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:22:40.613825 kubelet[2308]: E0910 
23:22:40.613804 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:22:41.260742 kubelet[2308]: I0910 23:22:41.260703 2308 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:22:41.617939 kubelet[2308]: E0910 23:22:41.617438 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:22:41.617939 kubelet[2308]: E0910 23:22:41.617522 2308 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 23:22:42.291830 kubelet[2308]: E0910 23:22:42.291790 2308 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 23:22:42.476183 kubelet[2308]: I0910 23:22:42.474627 2308 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 23:22:42.485075 kubelet[2308]: I0910 23:22:42.484955 2308 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:42.575400 kubelet[2308]: I0910 23:22:42.575288 2308 apiserver.go:52] "Watching apiserver" Sep 10 23:22:42.590128 kubelet[2308]: I0910 23:22:42.590095 2308 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 23:22:42.609153 kubelet[2308]: E0910 23:22:42.609106 2308 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:42.609364 kubelet[2308]: I0910 23:22:42.609341 2308 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:42.611405 kubelet[2308]: E0910 
23:22:42.611384 2308 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:42.611405 kubelet[2308]: I0910 23:22:42.611409 2308 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 23:22:42.612905 kubelet[2308]: E0910 23:22:42.612882 2308 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 10 23:22:44.485475 systemd[1]: Reload requested from client PID 2593 ('systemctl') (unit session-7.scope)... Sep 10 23:22:44.485491 systemd[1]: Reloading... Sep 10 23:22:44.557169 zram_generator::config[2636]: No configuration found. Sep 10 23:22:44.641698 kubelet[2308]: I0910 23:22:44.641663 2308 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:44.731608 systemd[1]: Reloading finished in 245 ms. Sep 10 23:22:44.760955 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:22:44.772190 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 23:22:44.772451 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 23:22:44.772517 systemd[1]: kubelet.service: Consumed 1.388s CPU time, 127.3M memory peak. Sep 10 23:22:44.774249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 23:22:44.911462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 10 23:22:44.916031 (kubelet)[2678]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 23:22:44.953046 kubelet[2678]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:22:44.953046 kubelet[2678]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 23:22:44.953046 kubelet[2678]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 23:22:44.953388 kubelet[2678]: I0910 23:22:44.953095 2678 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 23:22:44.958709 kubelet[2678]: I0910 23:22:44.958673 2678 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 10 23:22:44.958709 kubelet[2678]: I0910 23:22:44.958700 2678 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 23:22:44.958905 kubelet[2678]: I0910 23:22:44.958879 2678 server.go:956] "Client rotation is on, will bootstrap in background" Sep 10 23:22:44.960042 kubelet[2678]: I0910 23:22:44.960025 2678 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 10 23:22:44.962291 kubelet[2678]: I0910 23:22:44.962162 2678 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 23:22:44.967131 kubelet[2678]: I0910 23:22:44.967107 2678 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Sep 10 23:22:44.969652 kubelet[2678]: I0910 23:22:44.969623 2678 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 23:22:44.969849 kubelet[2678]: I0910 23:22:44.969826 2678 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 23:22:44.970025 kubelet[2678]: I0910 23:22:44.969850 2678 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVers
ion":2} Sep 10 23:22:44.970108 kubelet[2678]: I0910 23:22:44.970032 2678 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 23:22:44.970108 kubelet[2678]: I0910 23:22:44.970041 2678 container_manager_linux.go:303] "Creating device plugin manager" Sep 10 23:22:44.970108 kubelet[2678]: I0910 23:22:44.970080 2678 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:22:44.970241 kubelet[2678]: I0910 23:22:44.970227 2678 kubelet.go:480] "Attempting to sync node with API server" Sep 10 23:22:44.970269 kubelet[2678]: I0910 23:22:44.970243 2678 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 23:22:44.970269 kubelet[2678]: I0910 23:22:44.970266 2678 kubelet.go:386] "Adding apiserver pod source" Sep 10 23:22:44.970308 kubelet[2678]: I0910 23:22:44.970277 2678 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 23:22:44.971904 kubelet[2678]: I0910 23:22:44.971830 2678 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 10 23:22:44.972627 kubelet[2678]: I0910 23:22:44.972371 2678 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 10 23:22:44.974103 kubelet[2678]: I0910 23:22:44.974081 2678 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 23:22:44.974326 kubelet[2678]: I0910 23:22:44.974122 2678 server.go:1289] "Started kubelet" Sep 10 23:22:44.974977 kubelet[2678]: I0910 23:22:44.974802 2678 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 23:22:44.975219 kubelet[2678]: I0910 23:22:44.975073 2678 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 23:22:44.975219 kubelet[2678]: I0910 23:22:44.975124 2678 server.go:180] "Starting to listen" address="0.0.0.0" 
port=10250 Sep 10 23:22:44.975440 kubelet[2678]: I0910 23:22:44.975385 2678 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 23:22:44.976311 kubelet[2678]: I0910 23:22:44.975972 2678 server.go:317] "Adding debug handlers to kubelet server" Sep 10 23:22:44.978205 kubelet[2678]: I0910 23:22:44.977230 2678 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 23:22:44.985692 kubelet[2678]: I0910 23:22:44.985655 2678 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 23:22:44.985978 kubelet[2678]: E0910 23:22:44.985937 2678 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 23:22:44.986327 kubelet[2678]: I0910 23:22:44.986291 2678 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 23:22:44.986422 kubelet[2678]: I0910 23:22:44.986407 2678 reconciler.go:26] "Reconciler: start to sync state" Sep 10 23:22:44.991373 kubelet[2678]: I0910 23:22:44.991332 2678 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 23:22:44.993769 kubelet[2678]: E0910 23:22:44.993744 2678 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 23:22:44.993769 kubelet[2678]: I0910 23:22:44.993875 2678 factory.go:223] Registration of the containerd container factory successfully Sep 10 23:22:44.993769 kubelet[2678]: I0910 23:22:44.993886 2678 factory.go:223] Registration of the systemd container factory successfully Sep 10 23:22:45.002621 kubelet[2678]: I0910 23:22:45.002570 2678 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Sep 10 23:22:45.003523 kubelet[2678]: I0910 23:22:45.003487 2678 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 10 23:22:45.003523 kubelet[2678]: I0910 23:22:45.003515 2678 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 23:22:45.003639 kubelet[2678]: I0910 23:22:45.003540 2678 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 10 23:22:45.003639 kubelet[2678]: I0910 23:22:45.003547 2678 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 23:22:45.003639 kubelet[2678]: E0910 23:22:45.003600 2678 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 23:22:45.031200 kubelet[2678]: I0910 23:22:45.031084 2678 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 23:22:45.031200 kubelet[2678]: I0910 23:22:45.031103 2678 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 23:22:45.031200 kubelet[2678]: I0910 23:22:45.031144 2678 state_mem.go:36] "Initialized new in-memory state store" Sep 10 23:22:45.031330 kubelet[2678]: I0910 23:22:45.031288 2678 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 23:22:45.031330 kubelet[2678]: I0910 23:22:45.031299 2678 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 23:22:45.031330 kubelet[2678]: I0910 23:22:45.031315 2678 policy_none.go:49] "None policy: Start" Sep 10 23:22:45.031330 kubelet[2678]: I0910 23:22:45.031325 2678 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 23:22:45.031330 kubelet[2678]: I0910 23:22:45.031333 2678 state_mem.go:35] "Initializing new in-memory state store" Sep 10 23:22:45.031429 kubelet[2678]: I0910 23:22:45.031419 2678 state_mem.go:75] "Updated machine memory state" Sep 10 23:22:45.036486 kubelet[2678]: E0910 23:22:45.036015 2678 manager.go:517] "Failed to read 
data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 23:22:45.036486 kubelet[2678]: I0910 23:22:45.036256 2678 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 23:22:45.036486 kubelet[2678]: I0910 23:22:45.036271 2678 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 23:22:45.037269 kubelet[2678]: I0910 23:22:45.037248 2678 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 23:22:45.038274 kubelet[2678]: E0910 23:22:45.038251 2678 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 10 23:22:45.105302 kubelet[2678]: I0910 23:22:45.105257 2678 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 23:22:45.105409 kubelet[2678]: I0910 23:22:45.105273 2678 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:45.105409 kubelet[2678]: I0910 23:22:45.105378 2678 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:45.111320 kubelet[2678]: E0910 23:22:45.111277 2678 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:45.138708 kubelet[2678]: I0910 23:22:45.138677 2678 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 23:22:45.151418 kubelet[2678]: I0910 23:22:45.151370 2678 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 10 23:22:45.151556 kubelet[2678]: I0910 23:22:45.151511 2678 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 23:22:45.188041 kubelet[2678]: I0910 23:22:45.188002 2678 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23fb9015749e6fd52d468f784a73f207-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"23fb9015749e6fd52d468f784a73f207\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:45.188041 kubelet[2678]: I0910 23:22:45.188041 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23fb9015749e6fd52d468f784a73f207-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"23fb9015749e6fd52d468f784a73f207\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:45.188215 kubelet[2678]: I0910 23:22:45.188066 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23fb9015749e6fd52d468f784a73f207-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"23fb9015749e6fd52d468f784a73f207\") " pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:45.188215 kubelet[2678]: I0910 23:22:45.188085 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:45.188215 kubelet[2678]: I0910 23:22:45.188113 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:45.188215 kubelet[2678]: I0910 23:22:45.188134 2678 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:45.188215 kubelet[2678]: I0910 23:22:45.188176 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:45.188324 kubelet[2678]: I0910 23:22:45.188192 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 23:22:45.188324 kubelet[2678]: I0910 23:22:45.188208 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 10 23:22:45.972040 kubelet[2678]: I0910 23:22:45.971992 2678 apiserver.go:52] "Watching apiserver" Sep 10 23:22:45.986717 kubelet[2678]: I0910 23:22:45.986676 2678 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 23:22:46.019019 kubelet[2678]: I0910 23:22:46.018934 2678 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:46.019358 kubelet[2678]: I0910 
23:22:46.019131 2678 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 23:22:46.026647 kubelet[2678]: E0910 23:22:46.026608 2678 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 23:22:46.026647 kubelet[2678]: E0910 23:22:46.026670 2678 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 10 23:22:46.040790 kubelet[2678]: I0910 23:22:46.040733 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.040715275 podStartE2EDuration="1.040715275s" podCreationTimestamp="2025-09-10 23:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:22:46.040701341 +0000 UTC m=+1.121189037" watchObservedRunningTime="2025-09-10 23:22:46.040715275 +0000 UTC m=+1.121202971" Sep 10 23:22:46.061122 kubelet[2678]: I0910 23:22:46.061062 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.061044121 podStartE2EDuration="1.061044121s" podCreationTimestamp="2025-09-10 23:22:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:22:46.048849087 +0000 UTC m=+1.129336823" watchObservedRunningTime="2025-09-10 23:22:46.061044121 +0000 UTC m=+1.141531817" Sep 10 23:22:46.061368 kubelet[2678]: I0910 23:22:46.061201 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.061195069 podStartE2EDuration="2.061195069s" podCreationTimestamp="2025-09-10 23:22:44 +0000 UTC" firstStartedPulling="0001-01-01 
00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:22:46.060789111 +0000 UTC m=+1.141276807" watchObservedRunningTime="2025-09-10 23:22:46.061195069 +0000 UTC m=+1.141682765" Sep 10 23:22:49.973181 kubelet[2678]: I0910 23:22:49.973116 2678 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 23:22:49.973588 containerd[1527]: time="2025-09-10T23:22:49.973471063Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 23:22:49.973752 kubelet[2678]: I0910 23:22:49.973628 2678 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 23:22:50.862111 systemd[1]: Created slice kubepods-besteffort-pod6ad48ae1_e290_40be_ad91_b970e623d793.slice - libcontainer container kubepods-besteffort-pod6ad48ae1_e290_40be_ad91_b970e623d793.slice. Sep 10 23:22:50.927205 kubelet[2678]: I0910 23:22:50.927167 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6ad48ae1-e290-40be-ad91-b970e623d793-kube-proxy\") pod \"kube-proxy-ll42k\" (UID: \"6ad48ae1-e290-40be-ad91-b970e623d793\") " pod="kube-system/kube-proxy-ll42k" Sep 10 23:22:50.927335 kubelet[2678]: I0910 23:22:50.927211 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlsrb\" (UniqueName: \"kubernetes.io/projected/6ad48ae1-e290-40be-ad91-b970e623d793-kube-api-access-wlsrb\") pod \"kube-proxy-ll42k\" (UID: \"6ad48ae1-e290-40be-ad91-b970e623d793\") " pod="kube-system/kube-proxy-ll42k" Sep 10 23:22:50.927335 kubelet[2678]: I0910 23:22:50.927248 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ad48ae1-e290-40be-ad91-b970e623d793-xtables-lock\") pod 
\"kube-proxy-ll42k\" (UID: \"6ad48ae1-e290-40be-ad91-b970e623d793\") " pod="kube-system/kube-proxy-ll42k" Sep 10 23:22:50.927335 kubelet[2678]: I0910 23:22:50.927269 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ad48ae1-e290-40be-ad91-b970e623d793-lib-modules\") pod \"kube-proxy-ll42k\" (UID: \"6ad48ae1-e290-40be-ad91-b970e623d793\") " pod="kube-system/kube-proxy-ll42k" Sep 10 23:22:51.142352 systemd[1]: Created slice kubepods-besteffort-pod162d67c4_9024_44ca_a206_176db4c55202.slice - libcontainer container kubepods-besteffort-pod162d67c4_9024_44ca_a206_176db4c55202.slice. Sep 10 23:22:51.181694 containerd[1527]: time="2025-09-10T23:22:51.181660890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ll42k,Uid:6ad48ae1-e290-40be-ad91-b970e623d793,Namespace:kube-system,Attempt:0,}" Sep 10 23:22:51.197031 containerd[1527]: time="2025-09-10T23:22:51.196992916Z" level=info msg="connecting to shim 2e3e17d9ec6504e20b513806b6e9ff5eeeeb1aeed0712ec32d0601423f04cf38" address="unix:///run/containerd/s/e07b7287e2671175cbda45626f60cd52fa603f769a30f902763e809983855380" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:22:51.223643 systemd[1]: Started cri-containerd-2e3e17d9ec6504e20b513806b6e9ff5eeeeb1aeed0712ec32d0601423f04cf38.scope - libcontainer container 2e3e17d9ec6504e20b513806b6e9ff5eeeeb1aeed0712ec32d0601423f04cf38. 
Sep 10 23:22:51.228486 kubelet[2678]: I0910 23:22:51.228455 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/162d67c4-9024-44ca-a206-176db4c55202-var-lib-calico\") pod \"tigera-operator-755d956888-tt8nk\" (UID: \"162d67c4-9024-44ca-a206-176db4c55202\") " pod="tigera-operator/tigera-operator-755d956888-tt8nk" Sep 10 23:22:51.228950 kubelet[2678]: I0910 23:22:51.228578 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlj5b\" (UniqueName: \"kubernetes.io/projected/162d67c4-9024-44ca-a206-176db4c55202-kube-api-access-wlj5b\") pod \"tigera-operator-755d956888-tt8nk\" (UID: \"162d67c4-9024-44ca-a206-176db4c55202\") " pod="tigera-operator/tigera-operator-755d956888-tt8nk" Sep 10 23:22:51.245665 containerd[1527]: time="2025-09-10T23:22:51.245627709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ll42k,Uid:6ad48ae1-e290-40be-ad91-b970e623d793,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e3e17d9ec6504e20b513806b6e9ff5eeeeb1aeed0712ec32d0601423f04cf38\"" Sep 10 23:22:51.249671 containerd[1527]: time="2025-09-10T23:22:51.249600209Z" level=info msg="CreateContainer within sandbox \"2e3e17d9ec6504e20b513806b6e9ff5eeeeb1aeed0712ec32d0601423f04cf38\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 23:22:51.259182 containerd[1527]: time="2025-09-10T23:22:51.258651908Z" level=info msg="Container b4c872951f37f6ca58d7c7859b72750e5f7162e8bb78888d94121a0e234d5381: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:22:51.264951 containerd[1527]: time="2025-09-10T23:22:51.264916424Z" level=info msg="CreateContainer within sandbox \"2e3e17d9ec6504e20b513806b6e9ff5eeeeb1aeed0712ec32d0601423f04cf38\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b4c872951f37f6ca58d7c7859b72750e5f7162e8bb78888d94121a0e234d5381\"" Sep 10 
23:22:51.265481 containerd[1527]: time="2025-09-10T23:22:51.265457264Z" level=info msg="StartContainer for \"b4c872951f37f6ca58d7c7859b72750e5f7162e8bb78888d94121a0e234d5381\"" Sep 10 23:22:51.268314 containerd[1527]: time="2025-09-10T23:22:51.268215906Z" level=info msg="connecting to shim b4c872951f37f6ca58d7c7859b72750e5f7162e8bb78888d94121a0e234d5381" address="unix:///run/containerd/s/e07b7287e2671175cbda45626f60cd52fa603f769a30f902763e809983855380" protocol=ttrpc version=3 Sep 10 23:22:51.296383 systemd[1]: Started cri-containerd-b4c872951f37f6ca58d7c7859b72750e5f7162e8bb78888d94121a0e234d5381.scope - libcontainer container b4c872951f37f6ca58d7c7859b72750e5f7162e8bb78888d94121a0e234d5381. Sep 10 23:22:51.334130 containerd[1527]: time="2025-09-10T23:22:51.334089217Z" level=info msg="StartContainer for \"b4c872951f37f6ca58d7c7859b72750e5f7162e8bb78888d94121a0e234d5381\" returns successfully" Sep 10 23:22:51.449423 containerd[1527]: time="2025-09-10T23:22:51.449084681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-tt8nk,Uid:162d67c4-9024-44ca-a206-176db4c55202,Namespace:tigera-operator,Attempt:0,}" Sep 10 23:22:51.464373 containerd[1527]: time="2025-09-10T23:22:51.464323118Z" level=info msg="connecting to shim b014abb99e42579fe05af49ec100c738eb4fd413d4e6e0d01ccd09661e5dce77" address="unix:///run/containerd/s/9ec1c419a14dee78a7f86fafdeaa76ac67472e6c41dffa9812828078b559bc39" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:22:51.487352 systemd[1]: Started cri-containerd-b014abb99e42579fe05af49ec100c738eb4fd413d4e6e0d01ccd09661e5dce77.scope - libcontainer container b014abb99e42579fe05af49ec100c738eb4fd413d4e6e0d01ccd09661e5dce77. 
Sep 10 23:22:51.527149 containerd[1527]: time="2025-09-10T23:22:51.527104060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-tt8nk,Uid:162d67c4-9024-44ca-a206-176db4c55202,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b014abb99e42579fe05af49ec100c738eb4fd413d4e6e0d01ccd09661e5dce77\"" Sep 10 23:22:51.528830 containerd[1527]: time="2025-09-10T23:22:51.528783743Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 10 23:22:52.044712 kubelet[2678]: I0910 23:22:52.042700 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ll42k" podStartSLOduration=2.042683709 podStartE2EDuration="2.042683709s" podCreationTimestamp="2025-09-10 23:22:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:22:52.042219583 +0000 UTC m=+7.122707279" watchObservedRunningTime="2025-09-10 23:22:52.042683709 +0000 UTC m=+7.123171365" Sep 10 23:22:53.169094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1992520081.mount: Deactivated successfully. 
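Editor's note: the `pod_startup_latency_tracker` entries above embed Go-formatted timestamps (`2025-09-10 23:22:52.042219583 +0000 UTC m=+7.122707279`), which Python's `strptime` cannot parse directly: `%f` accepts at most six fractional digits, `%z` does not consume the trailing `UTC` zone name, and the `m=+…` monotonic-clock suffix must be stripped. A minimal sketch for post-processing such log lines, using only values quoted from the kube-proxy-ll42k entry above (note the computed delta differs from the logged `podStartE2EDuration` in the sub-millisecond range, since that figure is captured at a slightly different instant):

```python
import re
from datetime import datetime

def parse_kubelet_time(s: str) -> datetime:
    """Parse Go time.Time strings as they appear in kubelet logs,
    e.g. '2025-09-10 23:22:52.042219583 +0000 UTC m=+7.122707279'."""
    s = re.sub(r"\s+m=\+[\d.]+$", "", s)      # drop monotonic-clock reading
    s = s.removesuffix(" UTC")                 # %z already consumes '+0000'
    s = re.sub(r"(\.\d{6})\d+", r"\1", s)      # truncate ns -> us for %f
    for fmt in ("%Y-%m-%d %H:%M:%S.%f %z", "%Y-%m-%d %H:%M:%S %z"):
        try:
            return datetime.strptime(s, fmt)
        except ValueError:
            continue
    raise ValueError(f"unrecognized timestamp: {s!r}")

# Values taken verbatim from the kube-proxy-ll42k startup-latency entry.
created = parse_kubelet_time("2025-09-10 23:22:50 +0000 UTC")
observed = parse_kubelet_time(
    "2025-09-10 23:22:52.042219583 +0000 UTC m=+7.122707279")
elapsed = (observed - created).total_seconds()
```

This is an illustrative helper for reading the transcript, not part of the logged system itself.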
Sep 10 23:22:54.161942 containerd[1527]: time="2025-09-10T23:22:54.161886216Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:54.162863 containerd[1527]: time="2025-09-10T23:22:54.162822965Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 10 23:22:54.162978 containerd[1527]: time="2025-09-10T23:22:54.162957129Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:54.165366 containerd[1527]: time="2025-09-10T23:22:54.165320136Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:22:54.165745 containerd[1527]: time="2025-09-10T23:22:54.165715624Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 2.636880805s" Sep 10 23:22:54.165745 containerd[1527]: time="2025-09-10T23:22:54.165743202Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 10 23:22:54.173653 containerd[1527]: time="2025-09-10T23:22:54.173569885Z" level=info msg="CreateContainer within sandbox \"b014abb99e42579fe05af49ec100c738eb4fd413d4e6e0d01ccd09661e5dce77\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 10 23:22:54.206693 containerd[1527]: time="2025-09-10T23:22:54.206646892Z" level=info msg="Container 
d16355269f100edb019b18e9e799ad46b0cb89940002b182198195ab7657b467: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:22:54.209895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3787858141.mount: Deactivated successfully. Sep 10 23:22:54.212188 containerd[1527]: time="2025-09-10T23:22:54.212132382Z" level=info msg="CreateContainer within sandbox \"b014abb99e42579fe05af49ec100c738eb4fd413d4e6e0d01ccd09661e5dce77\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d16355269f100edb019b18e9e799ad46b0cb89940002b182198195ab7657b467\"" Sep 10 23:22:54.212802 containerd[1527]: time="2025-09-10T23:22:54.212746809Z" level=info msg="StartContainer for \"d16355269f100edb019b18e9e799ad46b0cb89940002b182198195ab7657b467\"" Sep 10 23:22:54.213852 containerd[1527]: time="2025-09-10T23:22:54.213811199Z" level=info msg="connecting to shim d16355269f100edb019b18e9e799ad46b0cb89940002b182198195ab7657b467" address="unix:///run/containerd/s/9ec1c419a14dee78a7f86fafdeaa76ac67472e6c41dffa9812828078b559bc39" protocol=ttrpc version=3 Sep 10 23:22:54.237314 systemd[1]: Started cri-containerd-d16355269f100edb019b18e9e799ad46b0cb89940002b182198195ab7657b467.scope - libcontainer container d16355269f100edb019b18e9e799ad46b0cb89940002b182198195ab7657b467. 
Sep 10 23:22:54.268711 containerd[1527]: time="2025-09-10T23:22:54.268664023Z" level=info msg="StartContainer for \"d16355269f100edb019b18e9e799ad46b0cb89940002b182198195ab7657b467\" returns successfully" Sep 10 23:22:55.048774 kubelet[2678]: I0910 23:22:55.048693 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-tt8nk" podStartSLOduration=1.407815949 podStartE2EDuration="4.048674578s" podCreationTimestamp="2025-09-10 23:22:51 +0000 UTC" firstStartedPulling="2025-09-10 23:22:51.528443852 +0000 UTC m=+6.608931548" lastFinishedPulling="2025-09-10 23:22:54.169302481 +0000 UTC m=+9.249790177" observedRunningTime="2025-09-10 23:22:55.048151706 +0000 UTC m=+10.128639442" watchObservedRunningTime="2025-09-10 23:22:55.048674578 +0000 UTC m=+10.129162274" Sep 10 23:22:58.531840 update_engine[1502]: I20250910 23:22:58.531770 1502 update_attempter.cc:509] Updating boot flags... Sep 10 23:22:59.553200 sudo[1732]: pam_unix(sudo:session): session closed for user root Sep 10 23:22:59.556389 sshd[1731]: Connection closed by 10.0.0.1 port 51232 Sep 10 23:22:59.556825 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Sep 10 23:22:59.560480 systemd[1]: sshd@6-10.0.0.24:22-10.0.0.1:51232.service: Deactivated successfully. Sep 10 23:22:59.563718 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 23:22:59.564100 systemd[1]: session-7.scope: Consumed 7.044s CPU time, 219.2M memory peak. Sep 10 23:22:59.565219 systemd-logind[1498]: Session 7 logged out. Waiting for processes to exit. Sep 10 23:22:59.569653 systemd-logind[1498]: Removed session 7. Sep 10 23:23:04.769687 systemd[1]: Created slice kubepods-besteffort-podd5b2904e_4eef_44f8_ad12_97e9492d78cd.slice - libcontainer container kubepods-besteffort-podd5b2904e_4eef_44f8_ad12_97e9492d78cd.slice. 
Sep 10 23:23:04.810723 kubelet[2678]: I0910 23:23:04.810659 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d5b2904e-4eef-44f8-ad12-97e9492d78cd-tigera-ca-bundle\") pod \"calico-typha-857c685d5b-k6ncc\" (UID: \"d5b2904e-4eef-44f8-ad12-97e9492d78cd\") " pod="calico-system/calico-typha-857c685d5b-k6ncc" Sep 10 23:23:04.810723 kubelet[2678]: I0910 23:23:04.810712 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d5b2904e-4eef-44f8-ad12-97e9492d78cd-typha-certs\") pod \"calico-typha-857c685d5b-k6ncc\" (UID: \"d5b2904e-4eef-44f8-ad12-97e9492d78cd\") " pod="calico-system/calico-typha-857c685d5b-k6ncc" Sep 10 23:23:04.810723 kubelet[2678]: I0910 23:23:04.810734 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4thsw\" (UniqueName: \"kubernetes.io/projected/d5b2904e-4eef-44f8-ad12-97e9492d78cd-kube-api-access-4thsw\") pod \"calico-typha-857c685d5b-k6ncc\" (UID: \"d5b2904e-4eef-44f8-ad12-97e9492d78cd\") " pod="calico-system/calico-typha-857c685d5b-k6ncc" Sep 10 23:23:05.022941 systemd[1]: Created slice kubepods-besteffort-podb4ba29d7_96f7_4e3c_b548_8d19db651371.slice - libcontainer container kubepods-besteffort-podb4ba29d7_96f7_4e3c_b548_8d19db651371.slice. 
Sep 10 23:23:05.083037 containerd[1527]: time="2025-09-10T23:23:05.082760465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-857c685d5b-k6ncc,Uid:d5b2904e-4eef-44f8-ad12-97e9492d78cd,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:05.112622 kubelet[2678]: I0910 23:23:05.112558 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b4ba29d7-96f7-4e3c-b548-8d19db651371-cni-bin-dir\") pod \"calico-node-7hl6q\" (UID: \"b4ba29d7-96f7-4e3c-b548-8d19db651371\") " pod="calico-system/calico-node-7hl6q" Sep 10 23:23:05.112622 kubelet[2678]: I0910 23:23:05.112610 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b4ba29d7-96f7-4e3c-b548-8d19db651371-cni-net-dir\") pod \"calico-node-7hl6q\" (UID: \"b4ba29d7-96f7-4e3c-b548-8d19db651371\") " pod="calico-system/calico-node-7hl6q" Sep 10 23:23:05.112622 kubelet[2678]: I0910 23:23:05.112627 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4ba29d7-96f7-4e3c-b548-8d19db651371-xtables-lock\") pod \"calico-node-7hl6q\" (UID: \"b4ba29d7-96f7-4e3c-b548-8d19db651371\") " pod="calico-system/calico-node-7hl6q" Sep 10 23:23:05.112815 kubelet[2678]: I0910 23:23:05.112646 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4ba29d7-96f7-4e3c-b548-8d19db651371-lib-modules\") pod \"calico-node-7hl6q\" (UID: \"b4ba29d7-96f7-4e3c-b548-8d19db651371\") " pod="calico-system/calico-node-7hl6q" Sep 10 23:23:05.112815 kubelet[2678]: I0910 23:23:05.112663 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b4ba29d7-96f7-4e3c-b548-8d19db651371-tigera-ca-bundle\") pod \"calico-node-7hl6q\" (UID: \"b4ba29d7-96f7-4e3c-b548-8d19db651371\") " pod="calico-system/calico-node-7hl6q" Sep 10 23:23:05.112815 kubelet[2678]: I0910 23:23:05.112690 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b4ba29d7-96f7-4e3c-b548-8d19db651371-flexvol-driver-host\") pod \"calico-node-7hl6q\" (UID: \"b4ba29d7-96f7-4e3c-b548-8d19db651371\") " pod="calico-system/calico-node-7hl6q" Sep 10 23:23:05.112815 kubelet[2678]: I0910 23:23:05.112711 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b4ba29d7-96f7-4e3c-b548-8d19db651371-policysync\") pod \"calico-node-7hl6q\" (UID: \"b4ba29d7-96f7-4e3c-b548-8d19db651371\") " pod="calico-system/calico-node-7hl6q" Sep 10 23:23:05.112815 kubelet[2678]: I0910 23:23:05.112727 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b4ba29d7-96f7-4e3c-b548-8d19db651371-var-run-calico\") pod \"calico-node-7hl6q\" (UID: \"b4ba29d7-96f7-4e3c-b548-8d19db651371\") " pod="calico-system/calico-node-7hl6q" Sep 10 23:23:05.112919 kubelet[2678]: I0910 23:23:05.112743 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmbmj\" (UniqueName: \"kubernetes.io/projected/b4ba29d7-96f7-4e3c-b548-8d19db651371-kube-api-access-kmbmj\") pod \"calico-node-7hl6q\" (UID: \"b4ba29d7-96f7-4e3c-b548-8d19db651371\") " pod="calico-system/calico-node-7hl6q" Sep 10 23:23:05.112919 kubelet[2678]: I0910 23:23:05.112761 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/b4ba29d7-96f7-4e3c-b548-8d19db651371-cni-log-dir\") pod \"calico-node-7hl6q\" (UID: \"b4ba29d7-96f7-4e3c-b548-8d19db651371\") " pod="calico-system/calico-node-7hl6q" Sep 10 23:23:05.112919 kubelet[2678]: I0910 23:23:05.112776 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b4ba29d7-96f7-4e3c-b548-8d19db651371-node-certs\") pod \"calico-node-7hl6q\" (UID: \"b4ba29d7-96f7-4e3c-b548-8d19db651371\") " pod="calico-system/calico-node-7hl6q" Sep 10 23:23:05.112919 kubelet[2678]: I0910 23:23:05.112794 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b4ba29d7-96f7-4e3c-b548-8d19db651371-var-lib-calico\") pod \"calico-node-7hl6q\" (UID: \"b4ba29d7-96f7-4e3c-b548-8d19db651371\") " pod="calico-system/calico-node-7hl6q" Sep 10 23:23:05.134419 containerd[1527]: time="2025-09-10T23:23:05.134366929Z" level=info msg="connecting to shim e7f79ad4bfc48aab93a22e6e6163780109da2ed8f6bbae248a4293bdc249ad33" address="unix:///run/containerd/s/6dba0a6e36c0f9b4ed86bce7a7b7fb0e5b9cc8587f590963096fd4c55b2bb6b2" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:05.175293 systemd[1]: Started cri-containerd-e7f79ad4bfc48aab93a22e6e6163780109da2ed8f6bbae248a4293bdc249ad33.scope - libcontainer container e7f79ad4bfc48aab93a22e6e6163780109da2ed8f6bbae248a4293bdc249ad33. 
Sep 10 23:23:05.219412 containerd[1527]: time="2025-09-10T23:23:05.219370536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-857c685d5b-k6ncc,Uid:d5b2904e-4eef-44f8-ad12-97e9492d78cd,Namespace:calico-system,Attempt:0,} returns sandbox id \"e7f79ad4bfc48aab93a22e6e6163780109da2ed8f6bbae248a4293bdc249ad33\"" Sep 10 23:23:05.229168 kubelet[2678]: E0910 23:23:05.225711 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.229168 kubelet[2678]: W0910 23:23:05.225748 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.234151 kubelet[2678]: E0910 23:23:05.232178 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.234151 kubelet[2678]: E0910 23:23:05.232628 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.234151 kubelet[2678]: W0910 23:23:05.232642 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.234151 kubelet[2678]: E0910 23:23:05.232744 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.239522 kubelet[2678]: E0910 23:23:05.239280 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.239778 kubelet[2678]: W0910 23:23:05.239688 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.239778 kubelet[2678]: E0910 23:23:05.239716 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.240922 containerd[1527]: time="2025-09-10T23:23:05.240876660Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 10 23:23:05.266947 kubelet[2678]: E0910 23:23:05.266894 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bbr5" podUID="41e05e41-b646-4243-9c89-ac4eb4228756" Sep 10 23:23:05.301861 kubelet[2678]: E0910 23:23:05.301683 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.301861 kubelet[2678]: W0910 23:23:05.301709 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.301861 kubelet[2678]: E0910 23:23:05.301729 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.302303 kubelet[2678]: E0910 23:23:05.302200 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.307287 kubelet[2678]: W0910 23:23:05.302216 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.307504 kubelet[2678]: E0910 23:23:05.307428 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.307833 kubelet[2678]: E0910 23:23:05.307734 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.307833 kubelet[2678]: W0910 23:23:05.307762 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.307833 kubelet[2678]: E0910 23:23:05.307775 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.308132 kubelet[2678]: E0910 23:23:05.308118 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.308244 kubelet[2678]: W0910 23:23:05.308229 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.308311 kubelet[2678]: E0910 23:23:05.308298 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.308560 kubelet[2678]: E0910 23:23:05.308548 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.308637 kubelet[2678]: W0910 23:23:05.308623 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.308702 kubelet[2678]: E0910 23:23:05.308689 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.308967 kubelet[2678]: E0910 23:23:05.308954 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.309040 kubelet[2678]: W0910 23:23:05.309026 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.309110 kubelet[2678]: E0910 23:23:05.309096 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.309356 kubelet[2678]: E0910 23:23:05.309344 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.309439 kubelet[2678]: W0910 23:23:05.309425 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.309507 kubelet[2678]: E0910 23:23:05.309493 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.309815 kubelet[2678]: E0910 23:23:05.309736 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.309815 kubelet[2678]: W0910 23:23:05.309756 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.309815 kubelet[2678]: E0910 23:23:05.309769 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.310087 kubelet[2678]: E0910 23:23:05.310072 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.310305 kubelet[2678]: W0910 23:23:05.310191 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.310305 kubelet[2678]: E0910 23:23:05.310210 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.310455 kubelet[2678]: E0910 23:23:05.310442 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.310515 kubelet[2678]: W0910 23:23:05.310503 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.310593 kubelet[2678]: E0910 23:23:05.310580 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.310819 kubelet[2678]: E0910 23:23:05.310806 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.310926 kubelet[2678]: W0910 23:23:05.310873 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.310926 kubelet[2678]: E0910 23:23:05.310889 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.311196 kubelet[2678]: E0910 23:23:05.311131 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.311403 kubelet[2678]: W0910 23:23:05.311383 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.311549 kubelet[2678]: E0910 23:23:05.311532 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.311852 kubelet[2678]: E0910 23:23:05.311795 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.311852 kubelet[2678]: W0910 23:23:05.311808 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.311852 kubelet[2678]: E0910 23:23:05.311818 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.312118 kubelet[2678]: E0910 23:23:05.312107 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.312220 kubelet[2678]: W0910 23:23:05.312206 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.312286 kubelet[2678]: E0910 23:23:05.312274 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.312511 kubelet[2678]: E0910 23:23:05.312499 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.312581 kubelet[2678]: W0910 23:23:05.312568 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.312642 kubelet[2678]: E0910 23:23:05.312630 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.312866 kubelet[2678]: E0910 23:23:05.312852 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.312996 kubelet[2678]: W0910 23:23:05.312928 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.312996 kubelet[2678]: E0910 23:23:05.312943 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.313303 kubelet[2678]: E0910 23:23:05.313287 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.313484 kubelet[2678]: W0910 23:23:05.313375 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.313484 kubelet[2678]: E0910 23:23:05.313393 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.313599 kubelet[2678]: E0910 23:23:05.313589 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.313659 kubelet[2678]: W0910 23:23:05.313647 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.313717 kubelet[2678]: E0910 23:23:05.313705 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.313992 kubelet[2678]: E0910 23:23:05.313935 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.313992 kubelet[2678]: W0910 23:23:05.313948 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.313992 kubelet[2678]: E0910 23:23:05.313957 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.314245 kubelet[2678]: E0910 23:23:05.314232 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.314321 kubelet[2678]: W0910 23:23:05.314309 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.314385 kubelet[2678]: E0910 23:23:05.314374 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.317716 kubelet[2678]: E0910 23:23:05.317608 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.317716 kubelet[2678]: W0910 23:23:05.317623 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.317716 kubelet[2678]: E0910 23:23:05.317634 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.317716 kubelet[2678]: I0910 23:23:05.317660 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhtf5\" (UniqueName: \"kubernetes.io/projected/41e05e41-b646-4243-9c89-ac4eb4228756-kube-api-access-zhtf5\") pod \"csi-node-driver-5bbr5\" (UID: \"41e05e41-b646-4243-9c89-ac4eb4228756\") " pod="calico-system/csi-node-driver-5bbr5" Sep 10 23:23:05.318118 kubelet[2678]: E0910 23:23:05.318029 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.318118 kubelet[2678]: W0910 23:23:05.318044 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.318118 kubelet[2678]: E0910 23:23:05.318055 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.318118 kubelet[2678]: I0910 23:23:05.318073 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/41e05e41-b646-4243-9c89-ac4eb4228756-registration-dir\") pod \"csi-node-driver-5bbr5\" (UID: \"41e05e41-b646-4243-9c89-ac4eb4228756\") " pod="calico-system/csi-node-driver-5bbr5" Sep 10 23:23:05.318466 kubelet[2678]: E0910 23:23:05.318393 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.318466 kubelet[2678]: W0910 23:23:05.318407 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.318466 kubelet[2678]: E0910 23:23:05.318417 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.318466 kubelet[2678]: I0910 23:23:05.318444 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/41e05e41-b646-4243-9c89-ac4eb4228756-varrun\") pod \"csi-node-driver-5bbr5\" (UID: \"41e05e41-b646-4243-9c89-ac4eb4228756\") " pod="calico-system/csi-node-driver-5bbr5" Sep 10 23:23:05.318765 kubelet[2678]: E0910 23:23:05.318737 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.318765 kubelet[2678]: W0910 23:23:05.318764 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.318864 kubelet[2678]: E0910 23:23:05.318778 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.318958 kubelet[2678]: E0910 23:23:05.318946 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.318958 kubelet[2678]: W0910 23:23:05.318957 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.319042 kubelet[2678]: E0910 23:23:05.318965 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.319154 kubelet[2678]: E0910 23:23:05.319132 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.319187 kubelet[2678]: W0910 23:23:05.319153 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.319187 kubelet[2678]: E0910 23:23:05.319171 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.319309 kubelet[2678]: E0910 23:23:05.319298 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.319309 kubelet[2678]: W0910 23:23:05.319309 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.319400 kubelet[2678]: E0910 23:23:05.319316 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.319476 kubelet[2678]: E0910 23:23:05.319463 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.319476 kubelet[2678]: W0910 23:23:05.319475 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.319537 kubelet[2678]: E0910 23:23:05.319483 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.319619 kubelet[2678]: E0910 23:23:05.319609 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.319619 kubelet[2678]: W0910 23:23:05.319618 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.319748 kubelet[2678]: E0910 23:23:05.319627 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.319797 kubelet[2678]: E0910 23:23:05.319774 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.319797 kubelet[2678]: W0910 23:23:05.319783 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.319797 kubelet[2678]: E0910 23:23:05.319793 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.319900 kubelet[2678]: I0910 23:23:05.319815 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/41e05e41-b646-4243-9c89-ac4eb4228756-kubelet-dir\") pod \"csi-node-driver-5bbr5\" (UID: \"41e05e41-b646-4243-9c89-ac4eb4228756\") " pod="calico-system/csi-node-driver-5bbr5" Sep 10 23:23:05.319968 kubelet[2678]: E0910 23:23:05.319954 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.319968 kubelet[2678]: W0910 23:23:05.319965 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.320097 kubelet[2678]: E0910 23:23:05.319974 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.320097 kubelet[2678]: I0910 23:23:05.319991 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/41e05e41-b646-4243-9c89-ac4eb4228756-socket-dir\") pod \"csi-node-driver-5bbr5\" (UID: \"41e05e41-b646-4243-9c89-ac4eb4228756\") " pod="calico-system/csi-node-driver-5bbr5" Sep 10 23:23:05.320490 kubelet[2678]: E0910 23:23:05.320414 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.320490 kubelet[2678]: W0910 23:23:05.320432 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.320490 kubelet[2678]: E0910 23:23:05.320444 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.320762 kubelet[2678]: E0910 23:23:05.320740 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.320955 kubelet[2678]: W0910 23:23:05.320824 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.320955 kubelet[2678]: E0910 23:23:05.320841 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.321084 kubelet[2678]: E0910 23:23:05.321073 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.321258 kubelet[2678]: W0910 23:23:05.321130 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.321353 kubelet[2678]: E0910 23:23:05.321339 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.321591 kubelet[2678]: E0910 23:23:05.321579 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.321663 kubelet[2678]: W0910 23:23:05.321650 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.321715 kubelet[2678]: E0910 23:23:05.321704 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.326379 containerd[1527]: time="2025-09-10T23:23:05.326313505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7hl6q,Uid:b4ba29d7-96f7-4e3c-b548-8d19db651371,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:05.363360 containerd[1527]: time="2025-09-10T23:23:05.363305598Z" level=info msg="connecting to shim 16c9e2c357efa901e020396bf1b95d7c65ba9f53bb50ed406b68c275eb85b3f4" address="unix:///run/containerd/s/6863164376a8807cf51fb12ffe7b52d1c57c5e60367f9adf6d57f48a59078091" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:05.386330 systemd[1]: Started cri-containerd-16c9e2c357efa901e020396bf1b95d7c65ba9f53bb50ed406b68c275eb85b3f4.scope - libcontainer container 16c9e2c357efa901e020396bf1b95d7c65ba9f53bb50ed406b68c275eb85b3f4. Sep 10 23:23:05.421164 kubelet[2678]: E0910 23:23:05.421050 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.421164 kubelet[2678]: W0910 23:23:05.421076 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.421164 kubelet[2678]: E0910 23:23:05.421094 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.421342 kubelet[2678]: E0910 23:23:05.421318 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.421342 kubelet[2678]: W0910 23:23:05.421327 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.421342 kubelet[2678]: E0910 23:23:05.421337 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.421517 kubelet[2678]: E0910 23:23:05.421498 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.421517 kubelet[2678]: W0910 23:23:05.421513 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.421610 kubelet[2678]: E0910 23:23:05.421522 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.421691 kubelet[2678]: E0910 23:23:05.421677 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.421691 kubelet[2678]: W0910 23:23:05.421689 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.421810 kubelet[2678]: E0910 23:23:05.421698 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.422151 kubelet[2678]: E0910 23:23:05.422066 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.422151 kubelet[2678]: W0910 23:23:05.422092 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.422151 kubelet[2678]: E0910 23:23:05.422107 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.422577 kubelet[2678]: E0910 23:23:05.422471 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.422577 kubelet[2678]: W0910 23:23:05.422485 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.422577 kubelet[2678]: E0910 23:23:05.422496 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.422823 kubelet[2678]: E0910 23:23:05.422809 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.422894 kubelet[2678]: W0910 23:23:05.422881 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.422953 kubelet[2678]: E0910 23:23:05.422942 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.423296 kubelet[2678]: E0910 23:23:05.423216 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.423296 kubelet[2678]: W0910 23:23:05.423228 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.423296 kubelet[2678]: E0910 23:23:05.423239 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.423602 kubelet[2678]: E0910 23:23:05.423559 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.423602 kubelet[2678]: W0910 23:23:05.423573 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.423602 kubelet[2678]: E0910 23:23:05.423583 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.423950 kubelet[2678]: E0910 23:23:05.423913 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.423950 kubelet[2678]: W0910 23:23:05.423927 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.423950 kubelet[2678]: E0910 23:23:05.423937 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.424601 kubelet[2678]: E0910 23:23:05.424270 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.424601 kubelet[2678]: W0910 23:23:05.424283 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.424601 kubelet[2678]: E0910 23:23:05.424294 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.425090 kubelet[2678]: E0910 23:23:05.425073 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.425209 kubelet[2678]: W0910 23:23:05.425195 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.425266 kubelet[2678]: E0910 23:23:05.425255 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.425583 kubelet[2678]: E0910 23:23:05.425568 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.425725 kubelet[2678]: W0910 23:23:05.425630 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.425725 kubelet[2678]: E0910 23:23:05.425645 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.425994 kubelet[2678]: E0910 23:23:05.425980 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.426174 kubelet[2678]: W0910 23:23:05.426045 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.426174 kubelet[2678]: E0910 23:23:05.426061 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.426497 kubelet[2678]: E0910 23:23:05.426451 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.426748 kubelet[2678]: W0910 23:23:05.426703 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.426844 kubelet[2678]: E0910 23:23:05.426827 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.428264 kubelet[2678]: E0910 23:23:05.428246 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.428419 kubelet[2678]: W0910 23:23:05.428334 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.428419 kubelet[2678]: E0910 23:23:05.428353 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.429131 kubelet[2678]: E0910 23:23:05.429061 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.429448 kubelet[2678]: W0910 23:23:05.429318 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.429448 kubelet[2678]: E0910 23:23:05.429344 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.430180 kubelet[2678]: E0910 23:23:05.429603 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.430239 kubelet[2678]: W0910 23:23:05.430196 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.430239 kubelet[2678]: E0910 23:23:05.430226 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.430460 kubelet[2678]: E0910 23:23:05.430443 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.430460 kubelet[2678]: W0910 23:23:05.430456 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.430543 kubelet[2678]: E0910 23:23:05.430467 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.430627 kubelet[2678]: E0910 23:23:05.430613 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.430627 kubelet[2678]: W0910 23:23:05.430624 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.430675 kubelet[2678]: E0910 23:23:05.430633 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.430807 kubelet[2678]: E0910 23:23:05.430784 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.430807 kubelet[2678]: W0910 23:23:05.430804 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.430881 kubelet[2678]: E0910 23:23:05.430814 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.431340 kubelet[2678]: E0910 23:23:05.430965 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.431340 kubelet[2678]: W0910 23:23:05.430977 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.431340 kubelet[2678]: E0910 23:23:05.430985 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.431340 kubelet[2678]: E0910 23:23:05.431126 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.431340 kubelet[2678]: W0910 23:23:05.431133 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.431340 kubelet[2678]: E0910 23:23:05.431154 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.431340 kubelet[2678]: E0910 23:23:05.431329 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.431340 kubelet[2678]: W0910 23:23:05.431340 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.431340 kubelet[2678]: E0910 23:23:05.431350 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:05.431883 kubelet[2678]: E0910 23:23:05.431502 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.431883 kubelet[2678]: W0910 23:23:05.431510 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.431883 kubelet[2678]: E0910 23:23:05.431518 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:05.444383 containerd[1527]: time="2025-09-10T23:23:05.444345238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7hl6q,Uid:b4ba29d7-96f7-4e3c-b548-8d19db651371,Namespace:calico-system,Attempt:0,} returns sandbox id \"16c9e2c357efa901e020396bf1b95d7c65ba9f53bb50ed406b68c275eb85b3f4\"" Sep 10 23:23:05.444845 kubelet[2678]: E0910 23:23:05.444822 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:05.444845 kubelet[2678]: W0910 23:23:05.444841 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:05.444935 kubelet[2678]: E0910 23:23:05.444860 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:06.218900 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3501385310.mount: Deactivated successfully. 
Sep 10 23:23:06.920421 containerd[1527]: time="2025-09-10T23:23:06.920382180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:06.922615 containerd[1527]: time="2025-09-10T23:23:06.922423892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 10 23:23:06.923568 containerd[1527]: time="2025-09-10T23:23:06.923534039Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:06.925701 containerd[1527]: time="2025-09-10T23:23:06.925657660Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:06.926609 containerd[1527]: time="2025-09-10T23:23:06.926580982Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.685651383s" Sep 10 23:23:06.926847 containerd[1527]: time="2025-09-10T23:23:06.926611592Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 10 23:23:06.928604 containerd[1527]: time="2025-09-10T23:23:06.928349759Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 10 23:23:06.942639 containerd[1527]: time="2025-09-10T23:23:06.942592566Z" level=info msg="CreateContainer within sandbox \"e7f79ad4bfc48aab93a22e6e6163780109da2ed8f6bbae248a4293bdc249ad33\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 10 23:23:06.950349 containerd[1527]: time="2025-09-10T23:23:06.950300415Z" level=info msg="Container 8c839c487f2023421eb7ec3173b08763ea7e1a53d45576489002dae609eac1f5: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:06.957305 containerd[1527]: time="2025-09-10T23:23:06.957260882Z" level=info msg="CreateContainer within sandbox \"e7f79ad4bfc48aab93a22e6e6163780109da2ed8f6bbae248a4293bdc249ad33\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8c839c487f2023421eb7ec3173b08763ea7e1a53d45576489002dae609eac1f5\"" Sep 10 23:23:06.957976 containerd[1527]: time="2025-09-10T23:23:06.957778423Z" level=info msg="StartContainer for \"8c839c487f2023421eb7ec3173b08763ea7e1a53d45576489002dae609eac1f5\"" Sep 10 23:23:06.959048 containerd[1527]: time="2025-09-10T23:23:06.959014414Z" level=info msg="connecting to shim 8c839c487f2023421eb7ec3173b08763ea7e1a53d45576489002dae609eac1f5" address="unix:///run/containerd/s/6dba0a6e36c0f9b4ed86bce7a7b7fb0e5b9cc8587f590963096fd4c55b2bb6b2" protocol=ttrpc version=3 Sep 10 23:23:06.979553 systemd[1]: Started cri-containerd-8c839c487f2023421eb7ec3173b08763ea7e1a53d45576489002dae609eac1f5.scope - libcontainer container 8c839c487f2023421eb7ec3173b08763ea7e1a53d45576489002dae609eac1f5. 
Sep 10 23:23:07.004036 kubelet[2678]: E0910 23:23:07.003987 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bbr5" podUID="41e05e41-b646-4243-9c89-ac4eb4228756" Sep 10 23:23:07.038344 containerd[1527]: time="2025-09-10T23:23:07.038272742Z" level=info msg="StartContainer for \"8c839c487f2023421eb7ec3173b08763ea7e1a53d45576489002dae609eac1f5\" returns successfully" Sep 10 23:23:07.084749 kubelet[2678]: I0910 23:23:07.084271 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-857c685d5b-k6ncc" podStartSLOduration=1.39519973 podStartE2EDuration="3.084220999s" podCreationTimestamp="2025-09-10 23:23:04 +0000 UTC" firstStartedPulling="2025-09-10 23:23:05.238982569 +0000 UTC m=+20.319470265" lastFinishedPulling="2025-09-10 23:23:06.928003878 +0000 UTC m=+22.008491534" observedRunningTime="2025-09-10 23:23:07.083386241 +0000 UTC m=+22.163874017" watchObservedRunningTime="2025-09-10 23:23:07.084220999 +0000 UTC m=+22.164708655" Sep 10 23:23:07.129948 kubelet[2678]: E0910 23:23:07.129823 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.130351 kubelet[2678]: W0910 23:23:07.130170 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.130351 kubelet[2678]: E0910 23:23:07.130201 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.131152 kubelet[2678]: E0910 23:23:07.130890 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.131152 kubelet[2678]: W0910 23:23:07.130907 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.131152 kubelet[2678]: E0910 23:23:07.130969 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.131592 kubelet[2678]: E0910 23:23:07.131479 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.131696 kubelet[2678]: W0910 23:23:07.131653 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.132190 kubelet[2678]: E0910 23:23:07.132170 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.132491 kubelet[2678]: E0910 23:23:07.132474 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.132694 kubelet[2678]: W0910 23:23:07.132679 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.134457 kubelet[2678]: E0910 23:23:07.134319 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.134680 kubelet[2678]: E0910 23:23:07.134571 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.134680 kubelet[2678]: W0910 23:23:07.134585 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.134680 kubelet[2678]: E0910 23:23:07.134596 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.134824 kubelet[2678]: E0910 23:23:07.134811 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.134953 kubelet[2678]: W0910 23:23:07.134867 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.134953 kubelet[2678]: E0910 23:23:07.134880 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.135079 kubelet[2678]: E0910 23:23:07.135066 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.135128 kubelet[2678]: W0910 23:23:07.135118 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.135294 kubelet[2678]: E0910 23:23:07.135203 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.135399 kubelet[2678]: E0910 23:23:07.135378 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.135555 kubelet[2678]: W0910 23:23:07.135444 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.135555 kubelet[2678]: E0910 23:23:07.135464 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.135782 kubelet[2678]: E0910 23:23:07.135767 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.135844 kubelet[2678]: W0910 23:23:07.135833 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.136182 kubelet[2678]: E0910 23:23:07.135888 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.137103 kubelet[2678]: E0910 23:23:07.136459 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.137326 kubelet[2678]: W0910 23:23:07.137181 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.137326 kubelet[2678]: E0910 23:23:07.137202 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.137528 kubelet[2678]: E0910 23:23:07.137512 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.137591 kubelet[2678]: W0910 23:23:07.137579 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.137646 kubelet[2678]: E0910 23:23:07.137633 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.138192 kubelet[2678]: E0910 23:23:07.137984 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.138192 kubelet[2678]: W0910 23:23:07.137998 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.138192 kubelet[2678]: E0910 23:23:07.138009 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.138805 kubelet[2678]: E0910 23:23:07.138438 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.138805 kubelet[2678]: W0910 23:23:07.138452 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.138805 kubelet[2678]: E0910 23:23:07.138470 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.139122 kubelet[2678]: E0910 23:23:07.139007 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.139122 kubelet[2678]: W0910 23:23:07.139020 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.139122 kubelet[2678]: E0910 23:23:07.139032 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.139801 kubelet[2678]: E0910 23:23:07.139437 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.139801 kubelet[2678]: W0910 23:23:07.139598 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.139801 kubelet[2678]: E0910 23:23:07.139615 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.140339 kubelet[2678]: E0910 23:23:07.140323 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.140432 kubelet[2678]: W0910 23:23:07.140419 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.140507 kubelet[2678]: E0910 23:23:07.140494 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.141179 kubelet[2678]: E0910 23:23:07.140979 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.141179 kubelet[2678]: W0910 23:23:07.140994 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.141179 kubelet[2678]: E0910 23:23:07.141005 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.141945 kubelet[2678]: E0910 23:23:07.141912 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.141945 kubelet[2678]: W0910 23:23:07.141930 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.141945 kubelet[2678]: E0910 23:23:07.141944 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.142223 kubelet[2678]: E0910 23:23:07.142204 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.142223 kubelet[2678]: W0910 23:23:07.142218 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.142283 kubelet[2678]: E0910 23:23:07.142228 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.143459 kubelet[2678]: E0910 23:23:07.143435 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.143459 kubelet[2678]: W0910 23:23:07.143453 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.143459 kubelet[2678]: E0910 23:23:07.143466 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.143724 kubelet[2678]: E0910 23:23:07.143654 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.143724 kubelet[2678]: W0910 23:23:07.143669 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.143724 kubelet[2678]: E0910 23:23:07.143693 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.144163 kubelet[2678]: E0910 23:23:07.143854 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.144163 kubelet[2678]: W0910 23:23:07.143862 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.144163 kubelet[2678]: E0910 23:23:07.143870 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.144163 kubelet[2678]: E0910 23:23:07.144003 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.144163 kubelet[2678]: W0910 23:23:07.144010 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.144163 kubelet[2678]: E0910 23:23:07.144017 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.144163 kubelet[2678]: E0910 23:23:07.144167 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.144301 kubelet[2678]: W0910 23:23:07.144176 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.144301 kubelet[2678]: E0910 23:23:07.144184 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.144346 kubelet[2678]: E0910 23:23:07.144327 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.144346 kubelet[2678]: W0910 23:23:07.144335 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.144346 kubelet[2678]: E0910 23:23:07.144342 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.144840 kubelet[2678]: E0910 23:23:07.144816 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.144840 kubelet[2678]: W0910 23:23:07.144834 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.145087 kubelet[2678]: E0910 23:23:07.145046 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.146873 kubelet[2678]: E0910 23:23:07.146831 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.146873 kubelet[2678]: W0910 23:23:07.146852 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.146873 kubelet[2678]: E0910 23:23:07.146879 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.147275 kubelet[2678]: E0910 23:23:07.147048 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.147275 kubelet[2678]: W0910 23:23:07.147056 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.147275 kubelet[2678]: E0910 23:23:07.147064 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.147275 kubelet[2678]: E0910 23:23:07.147248 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.147275 kubelet[2678]: W0910 23:23:07.147257 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.147275 kubelet[2678]: E0910 23:23:07.147266 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.147552 kubelet[2678]: E0910 23:23:07.147525 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.147552 kubelet[2678]: W0910 23:23:07.147543 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.147615 kubelet[2678]: E0910 23:23:07.147556 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.148120 kubelet[2678]: E0910 23:23:07.148099 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.148210 kubelet[2678]: W0910 23:23:07.148113 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.148210 kubelet[2678]: E0910 23:23:07.148150 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.148334 kubelet[2678]: E0910 23:23:07.148315 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.148334 kubelet[2678]: W0910 23:23:07.148327 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.148402 kubelet[2678]: E0910 23:23:07.148335 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 10 23:23:07.152466 kubelet[2678]: E0910 23:23:07.152437 2678 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 10 23:23:07.152466 kubelet[2678]: W0910 23:23:07.152463 2678 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 10 23:23:07.152558 kubelet[2678]: E0910 23:23:07.152482 2678 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 10 23:23:07.971883 containerd[1527]: time="2025-09-10T23:23:07.971840527Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:07.979190 containerd[1527]: time="2025-09-10T23:23:07.973074779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 10 23:23:07.979285 containerd[1527]: time="2025-09-10T23:23:07.974811879Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:07.979340 containerd[1527]: time="2025-09-10T23:23:07.978200490Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 1.049809838s" Sep 10 23:23:07.979369 containerd[1527]: time="2025-09-10T23:23:07.979346073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 10 23:23:07.980589 containerd[1527]: time="2025-09-10T23:23:07.980263059Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:07.983426 containerd[1527]: time="2025-09-10T23:23:07.983401866Z" level=info msg="CreateContainer within sandbox \"16c9e2c357efa901e020396bf1b95d7c65ba9f53bb50ed406b68c275eb85b3f4\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 10 23:23:07.997795 containerd[1527]: time="2025-09-10T23:23:07.997587402Z" level=info msg="Container caca3695ffb1551c279edbb4cd2f1f8a137ecdfd7fecff809d285496c0dda968: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:08.019924 containerd[1527]: time="2025-09-10T23:23:08.019795499Z" level=info msg="CreateContainer within sandbox \"16c9e2c357efa901e020396bf1b95d7c65ba9f53bb50ed406b68c275eb85b3f4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"caca3695ffb1551c279edbb4cd2f1f8a137ecdfd7fecff809d285496c0dda968\"" Sep 10 23:23:08.021173 containerd[1527]: time="2025-09-10T23:23:08.020777973Z" level=info msg="StartContainer for \"caca3695ffb1551c279edbb4cd2f1f8a137ecdfd7fecff809d285496c0dda968\"" Sep 10 23:23:08.022365 containerd[1527]: time="2025-09-10T23:23:08.022339072Z" level=info msg="connecting to shim caca3695ffb1551c279edbb4cd2f1f8a137ecdfd7fecff809d285496c0dda968" address="unix:///run/containerd/s/6863164376a8807cf51fb12ffe7b52d1c57c5e60367f9adf6d57f48a59078091" protocol=ttrpc version=3 Sep 10 23:23:08.044401 systemd[1]: Started cri-containerd-caca3695ffb1551c279edbb4cd2f1f8a137ecdfd7fecff809d285496c0dda968.scope - libcontainer container caca3695ffb1551c279edbb4cd2f1f8a137ecdfd7fecff809d285496c0dda968. Sep 10 23:23:08.088902 systemd[1]: cri-containerd-caca3695ffb1551c279edbb4cd2f1f8a137ecdfd7fecff809d285496c0dda968.scope: Deactivated successfully. 
Sep 10 23:23:08.092608 containerd[1527]: time="2025-09-10T23:23:08.092577011Z" level=info msg="StartContainer for \"caca3695ffb1551c279edbb4cd2f1f8a137ecdfd7fecff809d285496c0dda968\" returns successfully" Sep 10 23:23:08.101066 kubelet[2678]: I0910 23:23:08.100946 2678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:23:08.104637 containerd[1527]: time="2025-09-10T23:23:08.104508386Z" level=info msg="received exit event container_id:\"caca3695ffb1551c279edbb4cd2f1f8a137ecdfd7fecff809d285496c0dda968\" id:\"caca3695ffb1551c279edbb4cd2f1f8a137ecdfd7fecff809d285496c0dda968\" pid:3381 exited_at:{seconds:1757546588 nanos:99473856}" Sep 10 23:23:08.104637 containerd[1527]: time="2025-09-10T23:23:08.104601456Z" level=info msg="TaskExit event in podsandbox handler container_id:\"caca3695ffb1551c279edbb4cd2f1f8a137ecdfd7fecff809d285496c0dda968\" id:\"caca3695ffb1551c279edbb4cd2f1f8a137ecdfd7fecff809d285496c0dda968\" pid:3381 exited_at:{seconds:1757546588 nanos:99473856}" Sep 10 23:23:08.151857 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-caca3695ffb1551c279edbb4cd2f1f8a137ecdfd7fecff809d285496c0dda968-rootfs.mount: Deactivated successfully. 
Sep 10 23:23:09.005258 kubelet[2678]: E0910 23:23:09.004890 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bbr5" podUID="41e05e41-b646-4243-9c89-ac4eb4228756" Sep 10 23:23:09.103063 containerd[1527]: time="2025-09-10T23:23:09.103012146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 10 23:23:10.780291 containerd[1527]: time="2025-09-10T23:23:10.780245504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:10.781122 containerd[1527]: time="2025-09-10T23:23:10.781016931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 10 23:23:10.782484 containerd[1527]: time="2025-09-10T23:23:10.782363087Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:10.785032 containerd[1527]: time="2025-09-10T23:23:10.784924841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:10.785875 containerd[1527]: time="2025-09-10T23:23:10.785845271Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 1.682788311s" Sep 10 23:23:10.786069 containerd[1527]: time="2025-09-10T23:23:10.786006119Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 10 23:23:10.795620 containerd[1527]: time="2025-09-10T23:23:10.795582016Z" level=info msg="CreateContainer within sandbox \"16c9e2c357efa901e020396bf1b95d7c65ba9f53bb50ed406b68c275eb85b3f4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 10 23:23:10.810119 containerd[1527]: time="2025-09-10T23:23:10.810083003Z" level=info msg="Container 3e58e692eb3d161a7ce9cb8ee4cae0aa6dd328486f1fbd96886d101671631b0f: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:10.820063 containerd[1527]: time="2025-09-10T23:23:10.820028650Z" level=info msg="CreateContainer within sandbox \"16c9e2c357efa901e020396bf1b95d7c65ba9f53bb50ed406b68c275eb85b3f4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3e58e692eb3d161a7ce9cb8ee4cae0aa6dd328486f1fbd96886d101671631b0f\"" Sep 10 23:23:10.821315 containerd[1527]: time="2025-09-10T23:23:10.821291781Z" level=info msg="StartContainer for \"3e58e692eb3d161a7ce9cb8ee4cae0aa6dd328486f1fbd96886d101671631b0f\"" Sep 10 23:23:10.824986 containerd[1527]: time="2025-09-10T23:23:10.824912847Z" level=info msg="connecting to shim 3e58e692eb3d161a7ce9cb8ee4cae0aa6dd328486f1fbd96886d101671631b0f" address="unix:///run/containerd/s/6863164376a8807cf51fb12ffe7b52d1c57c5e60367f9adf6d57f48a59078091" protocol=ttrpc version=3 Sep 10 23:23:10.847321 systemd[1]: Started cri-containerd-3e58e692eb3d161a7ce9cb8ee4cae0aa6dd328486f1fbd96886d101671631b0f.scope - libcontainer container 3e58e692eb3d161a7ce9cb8ee4cae0aa6dd328486f1fbd96886d101671631b0f. 
Sep 10 23:23:10.887942 containerd[1527]: time="2025-09-10T23:23:10.887895178Z" level=info msg="StartContainer for \"3e58e692eb3d161a7ce9cb8ee4cae0aa6dd328486f1fbd96886d101671631b0f\" returns successfully" Sep 10 23:23:11.004793 kubelet[2678]: E0910 23:23:11.004744 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-5bbr5" podUID="41e05e41-b646-4243-9c89-ac4eb4228756" Sep 10 23:23:11.436467 containerd[1527]: time="2025-09-10T23:23:11.436410644Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 23:23:11.438486 systemd[1]: cri-containerd-3e58e692eb3d161a7ce9cb8ee4cae0aa6dd328486f1fbd96886d101671631b0f.scope: Deactivated successfully. Sep 10 23:23:11.438857 systemd[1]: cri-containerd-3e58e692eb3d161a7ce9cb8ee4cae0aa6dd328486f1fbd96886d101671631b0f.scope: Consumed 468ms CPU time, 175.8M memory peak, 1.2M read from disk, 165.8M written to disk. 
Sep 10 23:23:11.443559 containerd[1527]: time="2025-09-10T23:23:11.443518413Z" level=info msg="received exit event container_id:\"3e58e692eb3d161a7ce9cb8ee4cae0aa6dd328486f1fbd96886d101671631b0f\" id:\"3e58e692eb3d161a7ce9cb8ee4cae0aa6dd328486f1fbd96886d101671631b0f\" pid:3439 exited_at:{seconds:1757546591 nanos:443303752}" Sep 10 23:23:11.443771 containerd[1527]: time="2025-09-10T23:23:11.443735274Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3e58e692eb3d161a7ce9cb8ee4cae0aa6dd328486f1fbd96886d101671631b0f\" id:\"3e58e692eb3d161a7ce9cb8ee4cae0aa6dd328486f1fbd96886d101671631b0f\" pid:3439 exited_at:{seconds:1757546591 nanos:443303752}" Sep 10 23:23:11.460654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e58e692eb3d161a7ce9cb8ee4cae0aa6dd328486f1fbd96886d101671631b0f-rootfs.mount: Deactivated successfully. Sep 10 23:23:11.480062 kubelet[2678]: I0910 23:23:11.480029 2678 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 23:23:11.643032 systemd[1]: Created slice kubepods-burstable-pod067c0573_11b6_4149_ab4e_f6083aab0f6f.slice - libcontainer container kubepods-burstable-pod067c0573_11b6_4149_ab4e_f6083aab0f6f.slice. Sep 10 23:23:11.656171 systemd[1]: Created slice kubepods-besteffort-pod0b19c13c_6d14_49c5_a626_4e3c52eb3382.slice - libcontainer container kubepods-besteffort-pod0b19c13c_6d14_49c5_a626_4e3c52eb3382.slice. Sep 10 23:23:11.665428 systemd[1]: Created slice kubepods-besteffort-pod15b1b1a9_9293_4e1d_8c21_5986b0198355.slice - libcontainer container kubepods-besteffort-pod15b1b1a9_9293_4e1d_8c21_5986b0198355.slice. 
Sep 10 23:23:11.677595 kubelet[2678]: I0910 23:23:11.677528 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01a270a8-5eb9-47d7-82a0-d986ee861f20-tigera-ca-bundle\") pod \"calico-kube-controllers-57b95bdd5f-cfktv\" (UID: \"01a270a8-5eb9-47d7-82a0-d986ee861f20\") " pod="calico-system/calico-kube-controllers-57b95bdd5f-cfktv" Sep 10 23:23:11.677595 kubelet[2678]: I0910 23:23:11.677601 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qcpn\" (UniqueName: \"kubernetes.io/projected/067c0573-11b6-4149-ab4e-f6083aab0f6f-kube-api-access-8qcpn\") pod \"coredns-674b8bbfcf-t9dwv\" (UID: \"067c0573-11b6-4149-ab4e-f6083aab0f6f\") " pod="kube-system/coredns-674b8bbfcf-t9dwv" Sep 10 23:23:11.677771 kubelet[2678]: I0910 23:23:11.677641 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww9t7\" (UniqueName: \"kubernetes.io/projected/01a270a8-5eb9-47d7-82a0-d986ee861f20-kube-api-access-ww9t7\") pod \"calico-kube-controllers-57b95bdd5f-cfktv\" (UID: \"01a270a8-5eb9-47d7-82a0-d986ee861f20\") " pod="calico-system/calico-kube-controllers-57b95bdd5f-cfktv" Sep 10 23:23:11.677771 kubelet[2678]: I0910 23:23:11.677660 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e9b8687-e618-4f3f-b612-90e366865f02-config\") pod \"goldmane-54d579b49d-gh44n\" (UID: \"6e9b8687-e618-4f3f-b612-90e366865f02\") " pod="calico-system/goldmane-54d579b49d-gh44n" Sep 10 23:23:11.677771 kubelet[2678]: I0910 23:23:11.677683 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b19c13c-6d14-49c5-a626-4e3c52eb3382-whisker-ca-bundle\") pod \"whisker-7d456785fc-ztznq\" 
(UID: \"0b19c13c-6d14-49c5-a626-4e3c52eb3382\") " pod="calico-system/whisker-7d456785fc-ztznq" Sep 10 23:23:11.677771 kubelet[2678]: I0910 23:23:11.677704 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvr8h\" (UniqueName: \"kubernetes.io/projected/6e9b8687-e618-4f3f-b612-90e366865f02-kube-api-access-rvr8h\") pod \"goldmane-54d579b49d-gh44n\" (UID: \"6e9b8687-e618-4f3f-b612-90e366865f02\") " pod="calico-system/goldmane-54d579b49d-gh44n" Sep 10 23:23:11.677771 kubelet[2678]: I0910 23:23:11.677729 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0b19c13c-6d14-49c5-a626-4e3c52eb3382-whisker-backend-key-pair\") pod \"whisker-7d456785fc-ztznq\" (UID: \"0b19c13c-6d14-49c5-a626-4e3c52eb3382\") " pod="calico-system/whisker-7d456785fc-ztznq" Sep 10 23:23:11.677995 kubelet[2678]: I0910 23:23:11.677751 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7vtw\" (UniqueName: \"kubernetes.io/projected/b5586149-7cf4-4fb0-87d1-9bd24bd8e6f0-kube-api-access-x7vtw\") pod \"calico-apiserver-577f7d6b4-whjln\" (UID: \"b5586149-7cf4-4fb0-87d1-9bd24bd8e6f0\") " pod="calico-apiserver/calico-apiserver-577f7d6b4-whjln" Sep 10 23:23:11.677995 kubelet[2678]: I0910 23:23:11.677786 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b5586149-7cf4-4fb0-87d1-9bd24bd8e6f0-calico-apiserver-certs\") pod \"calico-apiserver-577f7d6b4-whjln\" (UID: \"b5586149-7cf4-4fb0-87d1-9bd24bd8e6f0\") " pod="calico-apiserver/calico-apiserver-577f7d6b4-whjln" Sep 10 23:23:11.677995 kubelet[2678]: I0910 23:23:11.677809 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: 
\"kubernetes.io/secret/6e9b8687-e618-4f3f-b612-90e366865f02-goldmane-key-pair\") pod \"goldmane-54d579b49d-gh44n\" (UID: \"6e9b8687-e618-4f3f-b612-90e366865f02\") " pod="calico-system/goldmane-54d579b49d-gh44n" Sep 10 23:23:11.677995 kubelet[2678]: I0910 23:23:11.677868 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/769543eb-ef20-4f8e-9322-92751538eec2-calico-apiserver-certs\") pod \"calico-apiserver-67f6d9bb89-twvpl\" (UID: \"769543eb-ef20-4f8e-9322-92751538eec2\") " pod="calico-apiserver/calico-apiserver-67f6d9bb89-twvpl" Sep 10 23:23:11.677995 kubelet[2678]: I0910 23:23:11.677897 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcxfx\" (UniqueName: \"kubernetes.io/projected/769543eb-ef20-4f8e-9322-92751538eec2-kube-api-access-rcxfx\") pod \"calico-apiserver-67f6d9bb89-twvpl\" (UID: \"769543eb-ef20-4f8e-9322-92751538eec2\") " pod="calico-apiserver/calico-apiserver-67f6d9bb89-twvpl" Sep 10 23:23:11.678108 kubelet[2678]: I0910 23:23:11.677918 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/067c0573-11b6-4149-ab4e-f6083aab0f6f-config-volume\") pod \"coredns-674b8bbfcf-t9dwv\" (UID: \"067c0573-11b6-4149-ab4e-f6083aab0f6f\") " pod="kube-system/coredns-674b8bbfcf-t9dwv" Sep 10 23:23:11.678108 kubelet[2678]: I0910 23:23:11.677938 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/15b1b1a9-9293-4e1d-8c21-5986b0198355-calico-apiserver-certs\") pod \"calico-apiserver-67f6d9bb89-rw98g\" (UID: \"15b1b1a9-9293-4e1d-8c21-5986b0198355\") " pod="calico-apiserver/calico-apiserver-67f6d9bb89-rw98g" Sep 10 23:23:11.678108 kubelet[2678]: I0910 23:23:11.677956 2678 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bqlr6\" (UniqueName: \"kubernetes.io/projected/15b1b1a9-9293-4e1d-8c21-5986b0198355-kube-api-access-bqlr6\") pod \"calico-apiserver-67f6d9bb89-rw98g\" (UID: \"15b1b1a9-9293-4e1d-8c21-5986b0198355\") " pod="calico-apiserver/calico-apiserver-67f6d9bb89-rw98g" Sep 10 23:23:11.678108 kubelet[2678]: I0910 23:23:11.677982 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxcxj\" (UniqueName: \"kubernetes.io/projected/0b19c13c-6d14-49c5-a626-4e3c52eb3382-kube-api-access-hxcxj\") pod \"whisker-7d456785fc-ztznq\" (UID: \"0b19c13c-6d14-49c5-a626-4e3c52eb3382\") " pod="calico-system/whisker-7d456785fc-ztznq" Sep 10 23:23:11.678108 kubelet[2678]: I0910 23:23:11.678010 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e9b8687-e618-4f3f-b612-90e366865f02-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-gh44n\" (UID: \"6e9b8687-e618-4f3f-b612-90e366865f02\") " pod="calico-system/goldmane-54d579b49d-gh44n" Sep 10 23:23:11.678677 systemd[1]: Created slice kubepods-besteffort-pod01a270a8_5eb9_47d7_82a0_d986ee861f20.slice - libcontainer container kubepods-besteffort-pod01a270a8_5eb9_47d7_82a0_d986ee861f20.slice. Sep 10 23:23:11.686030 systemd[1]: Created slice kubepods-besteffort-pod769543eb_ef20_4f8e_9322_92751538eec2.slice - libcontainer container kubepods-besteffort-pod769543eb_ef20_4f8e_9322_92751538eec2.slice. Sep 10 23:23:11.693486 systemd[1]: Created slice kubepods-besteffort-pod6e9b8687_e618_4f3f_b612_90e366865f02.slice - libcontainer container kubepods-besteffort-pod6e9b8687_e618_4f3f_b612_90e366865f02.slice. 
Sep 10 23:23:11.699624 systemd[1]: Created slice kubepods-besteffort-podb5586149_7cf4_4fb0_87d1_9bd24bd8e6f0.slice - libcontainer container kubepods-besteffort-podb5586149_7cf4_4fb0_87d1_9bd24bd8e6f0.slice. Sep 10 23:23:11.704897 systemd[1]: Created slice kubepods-burstable-pod81ee5a93_1f6d_400e_9d58_87b6a038fce5.slice - libcontainer container kubepods-burstable-pod81ee5a93_1f6d_400e_9d58_87b6a038fce5.slice. Sep 10 23:23:11.778990 kubelet[2678]: I0910 23:23:11.778934 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81ee5a93-1f6d-400e-9d58-87b6a038fce5-config-volume\") pod \"coredns-674b8bbfcf-2vxr4\" (UID: \"81ee5a93-1f6d-400e-9d58-87b6a038fce5\") " pod="kube-system/coredns-674b8bbfcf-2vxr4" Sep 10 23:23:11.778990 kubelet[2678]: I0910 23:23:11.778983 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ttvd\" (UniqueName: \"kubernetes.io/projected/81ee5a93-1f6d-400e-9d58-87b6a038fce5-kube-api-access-8ttvd\") pod \"coredns-674b8bbfcf-2vxr4\" (UID: \"81ee5a93-1f6d-400e-9d58-87b6a038fce5\") " pod="kube-system/coredns-674b8bbfcf-2vxr4" Sep 10 23:23:11.951418 containerd[1527]: time="2025-09-10T23:23:11.951296143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9dwv,Uid:067c0573-11b6-4149-ab4e-f6083aab0f6f,Namespace:kube-system,Attempt:0,}" Sep 10 23:23:11.964714 containerd[1527]: time="2025-09-10T23:23:11.964671684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d456785fc-ztznq,Uid:0b19c13c-6d14-49c5-a626-4e3c52eb3382,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:11.973988 containerd[1527]: time="2025-09-10T23:23:11.973736886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6d9bb89-rw98g,Uid:15b1b1a9-9293-4e1d-8c21-5986b0198355,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:23:11.983421 containerd[1527]: 
time="2025-09-10T23:23:11.983366488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b95bdd5f-cfktv,Uid:01a270a8-5eb9-47d7-82a0-d986ee861f20,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:11.991738 containerd[1527]: time="2025-09-10T23:23:11.991685639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6d9bb89-twvpl,Uid:769543eb-ef20-4f8e-9322-92751538eec2,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:23:12.000913 containerd[1527]: time="2025-09-10T23:23:12.000864194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-gh44n,Uid:6e9b8687-e618-4f3f-b612-90e366865f02,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:12.004536 containerd[1527]: time="2025-09-10T23:23:12.004500185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577f7d6b4-whjln,Uid:b5586149-7cf4-4fb0-87d1-9bd24bd8e6f0,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:23:12.009347 containerd[1527]: time="2025-09-10T23:23:12.009315494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2vxr4,Uid:81ee5a93-1f6d-400e-9d58-87b6a038fce5,Namespace:kube-system,Attempt:0,}" Sep 10 23:23:12.095295 containerd[1527]: time="2025-09-10T23:23:12.095093129Z" level=error msg="Failed to destroy network for sandbox \"d9fddd769e68351a0f212374b4e4ee1b993253578a375bb380c5ac81e20f98d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.097111 containerd[1527]: time="2025-09-10T23:23:12.097007290Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6d9bb89-twvpl,Uid:769543eb-ef20-4f8e-9322-92751538eec2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d9fddd769e68351a0f212374b4e4ee1b993253578a375bb380c5ac81e20f98d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.105712 kubelet[2678]: E0910 23:23:12.105648 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9fddd769e68351a0f212374b4e4ee1b993253578a375bb380c5ac81e20f98d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.106107 kubelet[2678]: E0910 23:23:12.105749 2678 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9fddd769e68351a0f212374b4e4ee1b993253578a375bb380c5ac81e20f98d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f6d9bb89-twvpl" Sep 10 23:23:12.108314 containerd[1527]: time="2025-09-10T23:23:12.108274152Z" level=error msg="Failed to destroy network for sandbox \"37b8b1ad574ff55e551ac719a8ee6a218b1d7c5f2790a2748bf4c8f59c8d8e45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.109384 kubelet[2678]: E0910 23:23:12.109313 2678 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d9fddd769e68351a0f212374b4e4ee1b993253578a375bb380c5ac81e20f98d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f6d9bb89-twvpl" Sep 10 23:23:12.109478 kubelet[2678]: E0910 23:23:12.109421 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67f6d9bb89-twvpl_calico-apiserver(769543eb-ef20-4f8e-9322-92751538eec2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67f6d9bb89-twvpl_calico-apiserver(769543eb-ef20-4f8e-9322-92751538eec2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d9fddd769e68351a0f212374b4e4ee1b993253578a375bb380c5ac81e20f98d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67f6d9bb89-twvpl" podUID="769543eb-ef20-4f8e-9322-92751538eec2" Sep 10 23:23:12.109541 containerd[1527]: time="2025-09-10T23:23:12.109434508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577f7d6b4-whjln,Uid:b5586149-7cf4-4fb0-87d1-9bd24bd8e6f0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"37b8b1ad574ff55e551ac719a8ee6a218b1d7c5f2790a2748bf4c8f59c8d8e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.109730 kubelet[2678]: E0910 23:23:12.109631 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37b8b1ad574ff55e551ac719a8ee6a218b1d7c5f2790a2748bf4c8f59c8d8e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.109730 kubelet[2678]: 
E0910 23:23:12.109680 2678 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37b8b1ad574ff55e551ac719a8ee6a218b1d7c5f2790a2748bf4c8f59c8d8e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-577f7d6b4-whjln" Sep 10 23:23:12.109730 kubelet[2678]: E0910 23:23:12.109699 2678 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"37b8b1ad574ff55e551ac719a8ee6a218b1d7c5f2790a2748bf4c8f59c8d8e45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-577f7d6b4-whjln" Sep 10 23:23:12.109824 kubelet[2678]: E0910 23:23:12.109732 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-577f7d6b4-whjln_calico-apiserver(b5586149-7cf4-4fb0-87d1-9bd24bd8e6f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-577f7d6b4-whjln_calico-apiserver(b5586149-7cf4-4fb0-87d1-9bd24bd8e6f0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"37b8b1ad574ff55e551ac719a8ee6a218b1d7c5f2790a2748bf4c8f59c8d8e45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-577f7d6b4-whjln" podUID="b5586149-7cf4-4fb0-87d1-9bd24bd8e6f0" Sep 10 23:23:12.119989 containerd[1527]: time="2025-09-10T23:23:12.119812288Z" level=error msg="Failed to destroy network for sandbox 
\"33454ae5e0cbb56a77f380f905a51a949b3d8af94a817b2a928d50a97d0b9e8b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.121542 containerd[1527]: time="2025-09-10T23:23:12.121360629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 10 23:23:12.121871 containerd[1527]: time="2025-09-10T23:23:12.120984647Z" level=error msg="Failed to destroy network for sandbox \"969cc85b924479cee74f53d03a034a329b98b1bf5e7b47dd858f495f4dd6656f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.122487 containerd[1527]: time="2025-09-10T23:23:12.122450325Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6d9bb89-rw98g,Uid:15b1b1a9-9293-4e1d-8c21-5986b0198355,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"33454ae5e0cbb56a77f380f905a51a949b3d8af94a817b2a928d50a97d0b9e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.123975 containerd[1527]: time="2025-09-10T23:23:12.123935049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2vxr4,Uid:81ee5a93-1f6d-400e-9d58-87b6a038fce5,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"969cc85b924479cee74f53d03a034a329b98b1bf5e7b47dd858f495f4dd6656f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.124407 kubelet[2678]: 
E0910 23:23:12.124371 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33454ae5e0cbb56a77f380f905a51a949b3d8af94a817b2a928d50a97d0b9e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.124480 kubelet[2678]: E0910 23:23:12.124425 2678 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33454ae5e0cbb56a77f380f905a51a949b3d8af94a817b2a928d50a97d0b9e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f6d9bb89-rw98g" Sep 10 23:23:12.124480 kubelet[2678]: E0910 23:23:12.124445 2678 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33454ae5e0cbb56a77f380f905a51a949b3d8af94a817b2a928d50a97d0b9e8b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-67f6d9bb89-rw98g" Sep 10 23:23:12.124532 kubelet[2678]: E0910 23:23:12.124491 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-67f6d9bb89-rw98g_calico-apiserver(15b1b1a9-9293-4e1d-8c21-5986b0198355)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-67f6d9bb89-rw98g_calico-apiserver(15b1b1a9-9293-4e1d-8c21-5986b0198355)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33454ae5e0cbb56a77f380f905a51a949b3d8af94a817b2a928d50a97d0b9e8b\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-67f6d9bb89-rw98g" podUID="15b1b1a9-9293-4e1d-8c21-5986b0198355" Sep 10 23:23:12.124763 kubelet[2678]: E0910 23:23:12.124721 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"969cc85b924479cee74f53d03a034a329b98b1bf5e7b47dd858f495f4dd6656f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.124815 kubelet[2678]: E0910 23:23:12.124766 2678 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"969cc85b924479cee74f53d03a034a329b98b1bf5e7b47dd858f495f4dd6656f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2vxr4" Sep 10 23:23:12.124815 kubelet[2678]: E0910 23:23:12.124785 2678 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"969cc85b924479cee74f53d03a034a329b98b1bf5e7b47dd858f495f4dd6656f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2vxr4" Sep 10 23:23:12.124866 kubelet[2678]: E0910 23:23:12.124817 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2vxr4_kube-system(81ee5a93-1f6d-400e-9d58-87b6a038fce5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-674b8bbfcf-2vxr4_kube-system(81ee5a93-1f6d-400e-9d58-87b6a038fce5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"969cc85b924479cee74f53d03a034a329b98b1bf5e7b47dd858f495f4dd6656f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2vxr4" podUID="81ee5a93-1f6d-400e-9d58-87b6a038fce5" Sep 10 23:23:12.129992 containerd[1527]: time="2025-09-10T23:23:12.129937400Z" level=error msg="Failed to destroy network for sandbox \"7a9c7cd5c25f7a73312e7075034edad16d97f3e83fc1f03ff61677513e00755a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.131317 containerd[1527]: time="2025-09-10T23:23:12.131284207Z" level=error msg="Failed to destroy network for sandbox \"453ed844f4cda882524b91fa5e6f350e87b144c993d052643e0a5810a584c210\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.132218 containerd[1527]: time="2025-09-10T23:23:12.132020647Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b95bdd5f-cfktv,Uid:01a270a8-5eb9-47d7-82a0-d986ee861f20,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9c7cd5c25f7a73312e7075034edad16d97f3e83fc1f03ff61677513e00755a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.132394 kubelet[2678]: E0910 23:23:12.132246 2678 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9c7cd5c25f7a73312e7075034edad16d97f3e83fc1f03ff61677513e00755a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.132394 kubelet[2678]: E0910 23:23:12.132344 2678 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9c7cd5c25f7a73312e7075034edad16d97f3e83fc1f03ff61677513e00755a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57b95bdd5f-cfktv" Sep 10 23:23:12.132394 kubelet[2678]: E0910 23:23:12.132364 2678 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a9c7cd5c25f7a73312e7075034edad16d97f3e83fc1f03ff61677513e00755a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57b95bdd5f-cfktv" Sep 10 23:23:12.132483 kubelet[2678]: E0910 23:23:12.132409 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57b95bdd5f-cfktv_calico-system(01a270a8-5eb9-47d7-82a0-d986ee861f20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57b95bdd5f-cfktv_calico-system(01a270a8-5eb9-47d7-82a0-d986ee861f20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a9c7cd5c25f7a73312e7075034edad16d97f3e83fc1f03ff61677513e00755a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57b95bdd5f-cfktv" podUID="01a270a8-5eb9-47d7-82a0-d986ee861f20" Sep 10 23:23:12.133569 containerd[1527]: time="2025-09-10T23:23:12.132770451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-gh44n,Uid:6e9b8687-e618-4f3f-b612-90e366865f02,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"453ed844f4cda882524b91fa5e6f350e87b144c993d052643e0a5810a584c210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.133925 kubelet[2678]: E0910 23:23:12.132958 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"453ed844f4cda882524b91fa5e6f350e87b144c993d052643e0a5810a584c210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.133925 kubelet[2678]: E0910 23:23:12.133168 2678 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"453ed844f4cda882524b91fa5e6f350e87b144c993d052643e0a5810a584c210\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-gh44n" Sep 10 23:23:12.133925 kubelet[2678]: E0910 23:23:12.133191 2678 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"453ed844f4cda882524b91fa5e6f350e87b144c993d052643e0a5810a584c210\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-gh44n" Sep 10 23:23:12.134018 kubelet[2678]: E0910 23:23:12.133253 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-gh44n_calico-system(6e9b8687-e618-4f3f-b612-90e366865f02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-gh44n_calico-system(6e9b8687-e618-4f3f-b612-90e366865f02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"453ed844f4cda882524b91fa5e6f350e87b144c993d052643e0a5810a584c210\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-gh44n" podUID="6e9b8687-e618-4f3f-b612-90e366865f02" Sep 10 23:23:12.140086 containerd[1527]: time="2025-09-10T23:23:12.140009858Z" level=error msg="Failed to destroy network for sandbox \"aef610c0ca1d5a4069372ebd8c7156e7a826cc29254c22f648a381d2aaf983fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.141086 containerd[1527]: time="2025-09-10T23:23:12.141039938Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9dwv,Uid:067c0573-11b6-4149-ab4e-f6083aab0f6f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aef610c0ca1d5a4069372ebd8c7156e7a826cc29254c22f648a381d2aaf983fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 
23:23:12.141416 kubelet[2678]: E0910 23:23:12.141356 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aef610c0ca1d5a4069372ebd8c7156e7a826cc29254c22f648a381d2aaf983fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.141464 kubelet[2678]: E0910 23:23:12.141432 2678 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aef610c0ca1d5a4069372ebd8c7156e7a826cc29254c22f648a381d2aaf983fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t9dwv" Sep 10 23:23:12.141464 kubelet[2678]: E0910 23:23:12.141453 2678 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aef610c0ca1d5a4069372ebd8c7156e7a826cc29254c22f648a381d2aaf983fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t9dwv" Sep 10 23:23:12.141518 kubelet[2678]: E0910 23:23:12.141498 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-t9dwv_kube-system(067c0573-11b6-4149-ab4e-f6083aab0f6f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-t9dwv_kube-system(067c0573-11b6-4149-ab4e-f6083aab0f6f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aef610c0ca1d5a4069372ebd8c7156e7a826cc29254c22f648a381d2aaf983fe\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t9dwv" podUID="067c0573-11b6-4149-ab4e-f6083aab0f6f" Sep 10 23:23:12.144815 containerd[1527]: time="2025-09-10T23:23:12.144761150Z" level=error msg="Failed to destroy network for sandbox \"68110515f73b1d1afc395b2e0909be6e4bd1f6c066a85290f8984fff261bda16\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.145860 containerd[1527]: time="2025-09-10T23:23:12.145821198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d456785fc-ztznq,Uid:0b19c13c-6d14-49c5-a626-4e3c52eb3382,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68110515f73b1d1afc395b2e0909be6e4bd1f6c066a85290f8984fff261bda16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.146066 kubelet[2678]: E0910 23:23:12.145999 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68110515f73b1d1afc395b2e0909be6e4bd1f6c066a85290f8984fff261bda16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:12.146101 kubelet[2678]: E0910 23:23:12.146082 2678 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68110515f73b1d1afc395b2e0909be6e4bd1f6c066a85290f8984fff261bda16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d456785fc-ztznq" Sep 10 23:23:12.146123 kubelet[2678]: E0910 23:23:12.146101 2678 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68110515f73b1d1afc395b2e0909be6e4bd1f6c066a85290f8984fff261bda16\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d456785fc-ztznq" Sep 10 23:23:12.146222 kubelet[2678]: E0910 23:23:12.146196 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d456785fc-ztznq_calico-system(0b19c13c-6d14-49c5-a626-4e3c52eb3382)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d456785fc-ztznq_calico-system(0b19c13c-6d14-49c5-a626-4e3c52eb3382)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68110515f73b1d1afc395b2e0909be6e4bd1f6c066a85290f8984fff261bda16\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d456785fc-ztznq" podUID="0b19c13c-6d14-49c5-a626-4e3c52eb3382" Sep 10 23:23:12.811945 systemd[1]: run-netns-cni\x2d1b81d71d\x2db1a8\x2dcc6d\x2da01a\x2d1c2006a9fddc.mount: Deactivated successfully. Sep 10 23:23:12.812246 systemd[1]: run-netns-cni\x2d3cca5733\x2de52b\x2da007\x2df878\x2d150bb0bc3524.mount: Deactivated successfully. Sep 10 23:23:12.812301 systemd[1]: run-netns-cni\x2d4cd9e9a9\x2d0f98\x2dfceb\x2ddf93\x2de2beb7efde51.mount: Deactivated successfully. 
Sep 10 23:23:13.019895 systemd[1]: Created slice kubepods-besteffort-pod41e05e41_b646_4243_9c89_ac4eb4228756.slice - libcontainer container kubepods-besteffort-pod41e05e41_b646_4243_9c89_ac4eb4228756.slice. Sep 10 23:23:13.023013 containerd[1527]: time="2025-09-10T23:23:13.022979637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bbr5,Uid:41e05e41-b646-4243-9c89-ac4eb4228756,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:13.093347 containerd[1527]: time="2025-09-10T23:23:13.093238820Z" level=error msg="Failed to destroy network for sandbox \"e8f4d6e77c9adb023086c8514cd95c7f41c45188d883b0e860fe7d542fa218a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:13.095984 systemd[1]: run-netns-cni\x2d263209df\x2d9b71\x2da2fe\x2da658\x2de8dca93d5a1e.mount: Deactivated successfully. Sep 10 23:23:13.156951 containerd[1527]: time="2025-09-10T23:23:13.156894915Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bbr5,Uid:41e05e41-b646-4243-9c89-ac4eb4228756,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8f4d6e77c9adb023086c8514cd95c7f41c45188d883b0e860fe7d542fa218a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 23:23:13.157429 kubelet[2678]: E0910 23:23:13.157124 2678 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8f4d6e77c9adb023086c8514cd95c7f41c45188d883b0e860fe7d542fa218a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 10 
23:23:13.157429 kubelet[2678]: E0910 23:23:13.157358 2678 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8f4d6e77c9adb023086c8514cd95c7f41c45188d883b0e860fe7d542fa218a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5bbr5" Sep 10 23:23:13.157429 kubelet[2678]: E0910 23:23:13.157384 2678 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8f4d6e77c9adb023086c8514cd95c7f41c45188d883b0e860fe7d542fa218a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-5bbr5" Sep 10 23:23:13.158090 kubelet[2678]: E0910 23:23:13.157804 2678 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-5bbr5_calico-system(41e05e41-b646-4243-9c89-ac4eb4228756)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-5bbr5_calico-system(41e05e41-b646-4243-9c89-ac4eb4228756)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8f4d6e77c9adb023086c8514cd95c7f41c45188d883b0e860fe7d542fa218a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-5bbr5" podUID="41e05e41-b646-4243-9c89-ac4eb4228756" Sep 10 23:23:14.963815 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1946031343.mount: Deactivated successfully. 
Sep 10 23:23:15.216514 containerd[1527]: time="2025-09-10T23:23:15.216384190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:15.218248 containerd[1527]: time="2025-09-10T23:23:15.218206513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 10 23:23:15.221356 containerd[1527]: time="2025-09-10T23:23:15.221314108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 3.099914388s" Sep 10 23:23:15.221356 containerd[1527]: time="2025-09-10T23:23:15.221354878Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 10 23:23:15.224108 containerd[1527]: time="2025-09-10T23:23:15.224033290Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:15.224877 containerd[1527]: time="2025-09-10T23:23:15.224795355Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:15.234292 containerd[1527]: time="2025-09-10T23:23:15.234241092Z" level=info msg="CreateContainer within sandbox \"16c9e2c357efa901e020396bf1b95d7c65ba9f53bb50ed406b68c275eb85b3f4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 10 23:23:15.240926 kubelet[2678]: I0910 23:23:15.240741 2678 prober_manager.go:312] "Failed to trigger 
a manual run" probe="Readiness" Sep 10 23:23:15.243322 containerd[1527]: time="2025-09-10T23:23:15.243283170Z" level=info msg="Container 4875ef8629b196357d714d3d9ca5c8533c911f4b25f0e7a660a064dfc65017de: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:15.261562 containerd[1527]: time="2025-09-10T23:23:15.260630108Z" level=info msg="CreateContainer within sandbox \"16c9e2c357efa901e020396bf1b95d7c65ba9f53bb50ed406b68c275eb85b3f4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4875ef8629b196357d714d3d9ca5c8533c911f4b25f0e7a660a064dfc65017de\"" Sep 10 23:23:15.263631 containerd[1527]: time="2025-09-10T23:23:15.262390096Z" level=info msg="StartContainer for \"4875ef8629b196357d714d3d9ca5c8533c911f4b25f0e7a660a064dfc65017de\"" Sep 10 23:23:15.267673 containerd[1527]: time="2025-09-10T23:23:15.267639973Z" level=info msg="connecting to shim 4875ef8629b196357d714d3d9ca5c8533c911f4b25f0e7a660a064dfc65017de" address="unix:///run/containerd/s/6863164376a8807cf51fb12ffe7b52d1c57c5e60367f9adf6d57f48a59078091" protocol=ttrpc version=3 Sep 10 23:23:15.303298 systemd[1]: Started cri-containerd-4875ef8629b196357d714d3d9ca5c8533c911f4b25f0e7a660a064dfc65017de.scope - libcontainer container 4875ef8629b196357d714d3d9ca5c8533c911f4b25f0e7a660a064dfc65017de. Sep 10 23:23:15.415864 containerd[1527]: time="2025-09-10T23:23:15.415800439Z" level=info msg="StartContainer for \"4875ef8629b196357d714d3d9ca5c8533c911f4b25f0e7a660a064dfc65017de\" returns successfully" Sep 10 23:23:15.467160 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 10 23:23:15.467473 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 10 23:23:15.602033 kubelet[2678]: I0910 23:23:15.601992 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b19c13c-6d14-49c5-a626-4e3c52eb3382-whisker-ca-bundle\") pod \"0b19c13c-6d14-49c5-a626-4e3c52eb3382\" (UID: \"0b19c13c-6d14-49c5-a626-4e3c52eb3382\") " Sep 10 23:23:15.602218 kubelet[2678]: I0910 23:23:15.602038 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0b19c13c-6d14-49c5-a626-4e3c52eb3382-whisker-backend-key-pair\") pod \"0b19c13c-6d14-49c5-a626-4e3c52eb3382\" (UID: \"0b19c13c-6d14-49c5-a626-4e3c52eb3382\") " Sep 10 23:23:15.602218 kubelet[2678]: I0910 23:23:15.602065 2678 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxcxj\" (UniqueName: \"kubernetes.io/projected/0b19c13c-6d14-49c5-a626-4e3c52eb3382-kube-api-access-hxcxj\") pod \"0b19c13c-6d14-49c5-a626-4e3c52eb3382\" (UID: \"0b19c13c-6d14-49c5-a626-4e3c52eb3382\") " Sep 10 23:23:15.613686 kubelet[2678]: I0910 23:23:15.613625 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b19c13c-6d14-49c5-a626-4e3c52eb3382-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0b19c13c-6d14-49c5-a626-4e3c52eb3382" (UID: "0b19c13c-6d14-49c5-a626-4e3c52eb3382"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 23:23:15.614381 kubelet[2678]: I0910 23:23:15.614341 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b19c13c-6d14-49c5-a626-4e3c52eb3382-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0b19c13c-6d14-49c5-a626-4e3c52eb3382" (UID: "0b19c13c-6d14-49c5-a626-4e3c52eb3382"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 23:23:15.614985 kubelet[2678]: I0910 23:23:15.614949 2678 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b19c13c-6d14-49c5-a626-4e3c52eb3382-kube-api-access-hxcxj" (OuterVolumeSpecName: "kube-api-access-hxcxj") pod "0b19c13c-6d14-49c5-a626-4e3c52eb3382" (UID: "0b19c13c-6d14-49c5-a626-4e3c52eb3382"). InnerVolumeSpecName "kube-api-access-hxcxj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 23:23:15.702868 kubelet[2678]: I0910 23:23:15.702812 2678 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b19c13c-6d14-49c5-a626-4e3c52eb3382-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 10 23:23:15.702868 kubelet[2678]: I0910 23:23:15.702853 2678 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0b19c13c-6d14-49c5-a626-4e3c52eb3382-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 10 23:23:15.702868 kubelet[2678]: I0910 23:23:15.702863 2678 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hxcxj\" (UniqueName: \"kubernetes.io/projected/0b19c13c-6d14-49c5-a626-4e3c52eb3382-kube-api-access-hxcxj\") on node \"localhost\" DevicePath \"\"" Sep 10 23:23:15.964644 systemd[1]: var-lib-kubelet-pods-0b19c13c\x2d6d14\x2d49c5\x2da626\x2d4e3c52eb3382-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhxcxj.mount: Deactivated successfully. Sep 10 23:23:15.965051 systemd[1]: var-lib-kubelet-pods-0b19c13c\x2d6d14\x2d49c5\x2da626\x2d4e3c52eb3382-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 10 23:23:16.155284 systemd[1]: Removed slice kubepods-besteffort-pod0b19c13c_6d14_49c5_a626_4e3c52eb3382.slice - libcontainer container kubepods-besteffort-pod0b19c13c_6d14_49c5_a626_4e3c52eb3382.slice. 
Sep 10 23:23:16.207743 kubelet[2678]: I0910 23:23:16.206545 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7hl6q" podStartSLOduration=2.430400115 podStartE2EDuration="12.206528565s" podCreationTimestamp="2025-09-10 23:23:04 +0000 UTC" firstStartedPulling="2025-09-10 23:23:05.445789125 +0000 UTC m=+20.526276821" lastFinishedPulling="2025-09-10 23:23:15.221917575 +0000 UTC m=+30.302405271" observedRunningTime="2025-09-10 23:23:16.177128063 +0000 UTC m=+31.257615719" watchObservedRunningTime="2025-09-10 23:23:16.206528565 +0000 UTC m=+31.287016261" Sep 10 23:23:16.248838 systemd[1]: Created slice kubepods-besteffort-podad562b2b_9dc9_4ca9_ae28_b6b0af06fb52.slice - libcontainer container kubepods-besteffort-podad562b2b_9dc9_4ca9_ae28_b6b0af06fb52.slice. Sep 10 23:23:16.307423 kubelet[2678]: I0910 23:23:16.307367 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ad562b2b-9dc9-4ca9-ae28-b6b0af06fb52-whisker-backend-key-pair\") pod \"whisker-58774bc7c6-6q4s4\" (UID: \"ad562b2b-9dc9-4ca9-ae28-b6b0af06fb52\") " pod="calico-system/whisker-58774bc7c6-6q4s4" Sep 10 23:23:16.307775 kubelet[2678]: I0910 23:23:16.307430 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad562b2b-9dc9-4ca9-ae28-b6b0af06fb52-whisker-ca-bundle\") pod \"whisker-58774bc7c6-6q4s4\" (UID: \"ad562b2b-9dc9-4ca9-ae28-b6b0af06fb52\") " pod="calico-system/whisker-58774bc7c6-6q4s4" Sep 10 23:23:16.307775 kubelet[2678]: I0910 23:23:16.307503 2678 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b5nv\" (UniqueName: \"kubernetes.io/projected/ad562b2b-9dc9-4ca9-ae28-b6b0af06fb52-kube-api-access-6b5nv\") pod \"whisker-58774bc7c6-6q4s4\" (UID: 
\"ad562b2b-9dc9-4ca9-ae28-b6b0af06fb52\") " pod="calico-system/whisker-58774bc7c6-6q4s4" Sep 10 23:23:16.552361 containerd[1527]: time="2025-09-10T23:23:16.552256173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58774bc7c6-6q4s4,Uid:ad562b2b-9dc9-4ca9-ae28-b6b0af06fb52,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:16.709636 systemd-networkd[1425]: calia97430fc7d7: Link UP Sep 10 23:23:16.710772 systemd-networkd[1425]: calia97430fc7d7: Gained carrier Sep 10 23:23:16.725199 containerd[1527]: 2025-09-10 23:23:16.572 [INFO][3845] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 10 23:23:16.725199 containerd[1527]: 2025-09-10 23:23:16.603 [INFO][3845] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--58774bc7c6--6q4s4-eth0 whisker-58774bc7c6- calico-system ad562b2b-9dc9-4ca9-ae28-b6b0af06fb52 888 0 2025-09-10 23:23:16 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:58774bc7c6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-58774bc7c6-6q4s4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia97430fc7d7 [] [] }} ContainerID="f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" Namespace="calico-system" Pod="whisker-58774bc7c6-6q4s4" WorkloadEndpoint="localhost-k8s-whisker--58774bc7c6--6q4s4-" Sep 10 23:23:16.725199 containerd[1527]: 2025-09-10 23:23:16.603 [INFO][3845] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" Namespace="calico-system" Pod="whisker-58774bc7c6-6q4s4" WorkloadEndpoint="localhost-k8s-whisker--58774bc7c6--6q4s4-eth0" Sep 10 23:23:16.725199 containerd[1527]: 2025-09-10 23:23:16.662 [INFO][3860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" HandleID="k8s-pod-network.f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" Workload="localhost-k8s-whisker--58774bc7c6--6q4s4-eth0" Sep 10 23:23:16.725397 containerd[1527]: 2025-09-10 23:23:16.662 [INFO][3860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" HandleID="k8s-pod-network.f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" Workload="localhost-k8s-whisker--58774bc7c6--6q4s4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000519c40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-58774bc7c6-6q4s4", "timestamp":"2025-09-10 23:23:16.662395071 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:23:16.725397 containerd[1527]: 2025-09-10 23:23:16.662 [INFO][3860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:23:16.725397 containerd[1527]: 2025-09-10 23:23:16.662 [INFO][3860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:23:16.725397 containerd[1527]: 2025-09-10 23:23:16.662 [INFO][3860] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:23:16.725397 containerd[1527]: 2025-09-10 23:23:16.673 [INFO][3860] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" host="localhost" Sep 10 23:23:16.725397 containerd[1527]: 2025-09-10 23:23:16.679 [INFO][3860] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:23:16.725397 containerd[1527]: 2025-09-10 23:23:16.683 [INFO][3860] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:23:16.725397 containerd[1527]: 2025-09-10 23:23:16.685 [INFO][3860] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:16.725397 containerd[1527]: 2025-09-10 23:23:16.689 [INFO][3860] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:16.725397 containerd[1527]: 2025-09-10 23:23:16.689 [INFO][3860] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" host="localhost" Sep 10 23:23:16.725589 containerd[1527]: 2025-09-10 23:23:16.691 [INFO][3860] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3 Sep 10 23:23:16.725589 containerd[1527]: 2025-09-10 23:23:16.695 [INFO][3860] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" host="localhost" Sep 10 23:23:16.725589 containerd[1527]: 2025-09-10 23:23:16.700 [INFO][3860] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" host="localhost" Sep 10 23:23:16.725589 containerd[1527]: 2025-09-10 23:23:16.700 [INFO][3860] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" host="localhost" Sep 10 23:23:16.725589 containerd[1527]: 2025-09-10 23:23:16.700 [INFO][3860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:23:16.725589 containerd[1527]: 2025-09-10 23:23:16.700 [INFO][3860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" HandleID="k8s-pod-network.f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" Workload="localhost-k8s-whisker--58774bc7c6--6q4s4-eth0" Sep 10 23:23:16.725694 containerd[1527]: 2025-09-10 23:23:16.703 [INFO][3845] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" Namespace="calico-system" Pod="whisker-58774bc7c6-6q4s4" WorkloadEndpoint="localhost-k8s-whisker--58774bc7c6--6q4s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--58774bc7c6--6q4s4-eth0", GenerateName:"whisker-58774bc7c6-", Namespace:"calico-system", SelfLink:"", UID:"ad562b2b-9dc9-4ca9-ae28-b6b0af06fb52", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58774bc7c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-58774bc7c6-6q4s4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia97430fc7d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:16.725694 containerd[1527]: 2025-09-10 23:23:16.703 [INFO][3845] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" Namespace="calico-system" Pod="whisker-58774bc7c6-6q4s4" WorkloadEndpoint="localhost-k8s-whisker--58774bc7c6--6q4s4-eth0" Sep 10 23:23:16.725757 containerd[1527]: 2025-09-10 23:23:16.703 [INFO][3845] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia97430fc7d7 ContainerID="f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" Namespace="calico-system" Pod="whisker-58774bc7c6-6q4s4" WorkloadEndpoint="localhost-k8s-whisker--58774bc7c6--6q4s4-eth0" Sep 10 23:23:16.725757 containerd[1527]: 2025-09-10 23:23:16.710 [INFO][3845] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" Namespace="calico-system" Pod="whisker-58774bc7c6-6q4s4" WorkloadEndpoint="localhost-k8s-whisker--58774bc7c6--6q4s4-eth0" Sep 10 23:23:16.725795 containerd[1527]: 2025-09-10 23:23:16.711 [INFO][3845] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" Namespace="calico-system" Pod="whisker-58774bc7c6-6q4s4" 
WorkloadEndpoint="localhost-k8s-whisker--58774bc7c6--6q4s4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--58774bc7c6--6q4s4-eth0", GenerateName:"whisker-58774bc7c6-", Namespace:"calico-system", SelfLink:"", UID:"ad562b2b-9dc9-4ca9-ae28-b6b0af06fb52", ResourceVersion:"888", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 23, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"58774bc7c6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3", Pod:"whisker-58774bc7c6-6q4s4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia97430fc7d7", MAC:"82:40:07:15:da:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:16.725849 containerd[1527]: 2025-09-10 23:23:16.722 [INFO][3845] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" Namespace="calico-system" Pod="whisker-58774bc7c6-6q4s4" WorkloadEndpoint="localhost-k8s-whisker--58774bc7c6--6q4s4-eth0" Sep 10 23:23:16.762201 containerd[1527]: time="2025-09-10T23:23:16.762149891Z" level=info msg="connecting to shim 
f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3" address="unix:///run/containerd/s/827bbe4af605787c9ef94b1947ed5e2da13face5f945270775bfe1e1c7336245" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:16.812390 systemd[1]: Started cri-containerd-f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3.scope - libcontainer container f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3. Sep 10 23:23:16.835216 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:16.888048 containerd[1527]: time="2025-09-10T23:23:16.887997196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-58774bc7c6-6q4s4,Uid:ad562b2b-9dc9-4ca9-ae28-b6b0af06fb52,Namespace:calico-system,Attempt:0,} returns sandbox id \"f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3\"" Sep 10 23:23:16.889534 containerd[1527]: time="2025-09-10T23:23:16.889493388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 10 23:23:17.014498 kubelet[2678]: I0910 23:23:17.014443 2678 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b19c13c-6d14-49c5-a626-4e3c52eb3382" path="/var/lib/kubelet/pods/0b19c13c-6d14-49c5-a626-4e3c52eb3382/volumes" Sep 10 23:23:17.152670 kubelet[2678]: I0910 23:23:17.152149 2678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:23:17.169089 systemd-networkd[1425]: vxlan.calico: Link UP Sep 10 23:23:17.169099 systemd-networkd[1425]: vxlan.calico: Gained carrier Sep 10 23:23:17.730690 containerd[1527]: time="2025-09-10T23:23:17.730634692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:17.731358 containerd[1527]: time="2025-09-10T23:23:17.731317167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 
10 23:23:17.732381 containerd[1527]: time="2025-09-10T23:23:17.732342719Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:17.735224 containerd[1527]: time="2025-09-10T23:23:17.735192846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:17.735877 containerd[1527]: time="2025-09-10T23:23:17.735835112Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 846.306236ms" Sep 10 23:23:17.735877 containerd[1527]: time="2025-09-10T23:23:17.735869840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 10 23:23:17.740588 containerd[1527]: time="2025-09-10T23:23:17.740522736Z" level=info msg="CreateContainer within sandbox \"f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 10 23:23:17.750304 containerd[1527]: time="2025-09-10T23:23:17.750119753Z" level=info msg="Container 60d783ddd2a8da1e84a7e4aa56206b2fa7fc31713fe8fb481e8eb0f8134196c9: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:17.752884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount688136767.mount: Deactivated successfully. 
Sep 10 23:23:17.758828 containerd[1527]: time="2025-09-10T23:23:17.758779318Z" level=info msg="CreateContainer within sandbox \"f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"60d783ddd2a8da1e84a7e4aa56206b2fa7fc31713fe8fb481e8eb0f8134196c9\"" Sep 10 23:23:17.759608 containerd[1527]: time="2025-09-10T23:23:17.759567777Z" level=info msg="StartContainer for \"60d783ddd2a8da1e84a7e4aa56206b2fa7fc31713fe8fb481e8eb0f8134196c9\"" Sep 10 23:23:17.763383 containerd[1527]: time="2025-09-10T23:23:17.763326310Z" level=info msg="connecting to shim 60d783ddd2a8da1e84a7e4aa56206b2fa7fc31713fe8fb481e8eb0f8134196c9" address="unix:///run/containerd/s/827bbe4af605787c9ef94b1947ed5e2da13face5f945270775bfe1e1c7336245" protocol=ttrpc version=3 Sep 10 23:23:17.793343 systemd[1]: Started cri-containerd-60d783ddd2a8da1e84a7e4aa56206b2fa7fc31713fe8fb481e8eb0f8134196c9.scope - libcontainer container 60d783ddd2a8da1e84a7e4aa56206b2fa7fc31713fe8fb481e8eb0f8134196c9. Sep 10 23:23:17.831084 containerd[1527]: time="2025-09-10T23:23:17.831040475Z" level=info msg="StartContainer for \"60d783ddd2a8da1e84a7e4aa56206b2fa7fc31713fe8fb481e8eb0f8134196c9\" returns successfully" Sep 10 23:23:17.836852 containerd[1527]: time="2025-09-10T23:23:17.836672193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 10 23:23:18.608306 systemd-networkd[1425]: vxlan.calico: Gained IPv6LL Sep 10 23:23:18.672306 systemd-networkd[1425]: calia97430fc7d7: Gained IPv6LL Sep 10 23:23:19.116562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3013192565.mount: Deactivated successfully. 
Sep 10 23:23:19.155118 containerd[1527]: time="2025-09-10T23:23:19.155068183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:19.155682 containerd[1527]: time="2025-09-10T23:23:19.155651427Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 10 23:23:19.156746 containerd[1527]: time="2025-09-10T23:23:19.156706891Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:19.159967 containerd[1527]: time="2025-09-10T23:23:19.159928896Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:19.161127 containerd[1527]: time="2025-09-10T23:23:19.161091824Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.324373341s" Sep 10 23:23:19.161176 containerd[1527]: time="2025-09-10T23:23:19.161125511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 10 23:23:19.166973 containerd[1527]: time="2025-09-10T23:23:19.166869892Z" level=info msg="CreateContainer within sandbox \"f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 10 23:23:19.175347 
containerd[1527]: time="2025-09-10T23:23:19.175309167Z" level=info msg="Container 78dcef3d22139d3ff53087ad80fd695e69b7d808bcde591d6fbe93d73d7468ff: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:19.193370 containerd[1527]: time="2025-09-10T23:23:19.193328879Z" level=info msg="CreateContainer within sandbox \"f35befdbf48916e755b49fe386f1695541a4a354bb698602cc73edaa880e34b3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"78dcef3d22139d3ff53087ad80fd695e69b7d808bcde591d6fbe93d73d7468ff\"" Sep 10 23:23:19.194183 containerd[1527]: time="2025-09-10T23:23:19.194155934Z" level=info msg="StartContainer for \"78dcef3d22139d3ff53087ad80fd695e69b7d808bcde591d6fbe93d73d7468ff\"" Sep 10 23:23:19.195506 containerd[1527]: time="2025-09-10T23:23:19.195356510Z" level=info msg="connecting to shim 78dcef3d22139d3ff53087ad80fd695e69b7d808bcde591d6fbe93d73d7468ff" address="unix:///run/containerd/s/827bbe4af605787c9ef94b1947ed5e2da13face5f945270775bfe1e1c7336245" protocol=ttrpc version=3 Sep 10 23:23:19.218561 systemd[1]: Started cri-containerd-78dcef3d22139d3ff53087ad80fd695e69b7d808bcde591d6fbe93d73d7468ff.scope - libcontainer container 78dcef3d22139d3ff53087ad80fd695e69b7d808bcde591d6fbe93d73d7468ff. 
Sep 10 23:23:19.256584 containerd[1527]: time="2025-09-10T23:23:19.256530998Z" level=info msg="StartContainer for \"78dcef3d22139d3ff53087ad80fd695e69b7d808bcde591d6fbe93d73d7468ff\" returns successfully" Sep 10 23:23:20.187162 kubelet[2678]: I0910 23:23:20.186552 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-58774bc7c6-6q4s4" podStartSLOduration=1.914041681 podStartE2EDuration="4.186534631s" podCreationTimestamp="2025-09-10 23:23:16 +0000 UTC" firstStartedPulling="2025-09-10 23:23:16.889311105 +0000 UTC m=+31.969798761" lastFinishedPulling="2025-09-10 23:23:19.161804055 +0000 UTC m=+34.242291711" observedRunningTime="2025-09-10 23:23:20.18565473 +0000 UTC m=+35.266142466" watchObservedRunningTime="2025-09-10 23:23:20.186534631 +0000 UTC m=+35.267022368" Sep 10 23:23:20.896994 kubelet[2678]: I0910 23:23:20.896913 2678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:23:21.042651 containerd[1527]: time="2025-09-10T23:23:21.042577263Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4875ef8629b196357d714d3d9ca5c8533c911f4b25f0e7a660a064dfc65017de\" id:\"409953efd6347c8b2b30d0071dd5e835bb6b8f4eef8dc12d6058bbc072cf6ceb\" pid:4217 exited_at:{seconds:1757546601 nanos:42192506}" Sep 10 23:23:21.133024 containerd[1527]: time="2025-09-10T23:23:21.132975230Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4875ef8629b196357d714d3d9ca5c8533c911f4b25f0e7a660a064dfc65017de\" id:\"fe4f671c97b465a3c42e883a4da45140fbb4d2819ba38531e3dccd43c77502a9\" pid:4242 exited_at:{seconds:1757546601 nanos:132679291}" Sep 10 23:23:23.009312 containerd[1527]: time="2025-09-10T23:23:23.008554265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9dwv,Uid:067c0573-11b6-4149-ab4e-f6083aab0f6f,Namespace:kube-system,Attempt:0,}" Sep 10 23:23:23.009312 containerd[1527]: time="2025-09-10T23:23:23.009180303Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-57b95bdd5f-cfktv,Uid:01a270a8-5eb9-47d7-82a0-d986ee861f20,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:23.009928 containerd[1527]: time="2025-09-10T23:23:23.009557375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6d9bb89-twvpl,Uid:769543eb-ef20-4f8e-9322-92751538eec2,Namespace:calico-apiserver,Attempt:0,}" Sep 10 23:23:23.205132 systemd-networkd[1425]: cali196180250d1: Link UP Sep 10 23:23:23.205860 systemd-networkd[1425]: cali196180250d1: Gained carrier Sep 10 23:23:23.222923 containerd[1527]: 2025-09-10 23:23:23.067 [INFO][4264] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--t9dwv-eth0 coredns-674b8bbfcf- kube-system 067c0573-11b6-4149-ab4e-f6083aab0f6f 815 0 2025-09-10 23:22:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-t9dwv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali196180250d1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9dwv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9dwv-" Sep 10 23:23:23.222923 containerd[1527]: 2025-09-10 23:23:23.068 [INFO][4264] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9dwv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9dwv-eth0" Sep 10 23:23:23.222923 containerd[1527]: 2025-09-10 23:23:23.112 [INFO][4308] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" HandleID="k8s-pod-network.1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" Workload="localhost-k8s-coredns--674b8bbfcf--t9dwv-eth0" Sep 10 23:23:23.223122 containerd[1527]: 2025-09-10 23:23:23.112 [INFO][4308] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" HandleID="k8s-pod-network.1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" Workload="localhost-k8s-coredns--674b8bbfcf--t9dwv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c790), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-t9dwv", "timestamp":"2025-09-10 23:23:23.112341047 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:23:23.223122 containerd[1527]: 2025-09-10 23:23:23.112 [INFO][4308] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:23:23.223122 containerd[1527]: 2025-09-10 23:23:23.112 [INFO][4308] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:23:23.223122 containerd[1527]: 2025-09-10 23:23:23.112 [INFO][4308] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:23:23.223122 containerd[1527]: 2025-09-10 23:23:23.124 [INFO][4308] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" host="localhost" Sep 10 23:23:23.223122 containerd[1527]: 2025-09-10 23:23:23.129 [INFO][4308] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:23:23.223122 containerd[1527]: 2025-09-10 23:23:23.135 [INFO][4308] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:23:23.223122 containerd[1527]: 2025-09-10 23:23:23.137 [INFO][4308] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:23.223122 containerd[1527]: 2025-09-10 23:23:23.140 [INFO][4308] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:23.223122 containerd[1527]: 2025-09-10 23:23:23.140 [INFO][4308] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" host="localhost" Sep 10 23:23:23.224275 containerd[1527]: 2025-09-10 23:23:23.142 [INFO][4308] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791 Sep 10 23:23:23.224275 containerd[1527]: 2025-09-10 23:23:23.176 [INFO][4308] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" host="localhost" Sep 10 23:23:23.224275 containerd[1527]: 2025-09-10 23:23:23.200 [INFO][4308] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" host="localhost" Sep 10 23:23:23.224275 containerd[1527]: 2025-09-10 23:23:23.201 [INFO][4308] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" host="localhost" Sep 10 23:23:23.224275 containerd[1527]: 2025-09-10 23:23:23.201 [INFO][4308] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:23:23.224275 containerd[1527]: 2025-09-10 23:23:23.201 [INFO][4308] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" HandleID="k8s-pod-network.1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" Workload="localhost-k8s-coredns--674b8bbfcf--t9dwv-eth0" Sep 10 23:23:23.224477 containerd[1527]: 2025-09-10 23:23:23.202 [INFO][4264] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9dwv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9dwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t9dwv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"067c0573-11b6-4149-ab4e-f6083aab0f6f", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-t9dwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali196180250d1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:23.224554 containerd[1527]: 2025-09-10 23:23:23.203 [INFO][4264] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9dwv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9dwv-eth0" Sep 10 23:23:23.224554 containerd[1527]: 2025-09-10 23:23:23.203 [INFO][4264] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali196180250d1 ContainerID="1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9dwv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9dwv-eth0" Sep 10 23:23:23.224554 containerd[1527]: 2025-09-10 23:23:23.205 [INFO][4264] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9dwv" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9dwv-eth0" Sep 10 23:23:23.224630 containerd[1527]: 2025-09-10 23:23:23.209 [INFO][4264] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9dwv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9dwv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--t9dwv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"067c0573-11b6-4149-ab4e-f6083aab0f6f", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791", Pod:"coredns-674b8bbfcf-t9dwv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali196180250d1", MAC:"c6:ff:a0:bb:2f:56", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:23.224630 containerd[1527]: 2025-09-10 23:23:23.219 [INFO][4264] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9dwv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--t9dwv-eth0" Sep 10 23:23:23.268759 systemd-networkd[1425]: cali70bf28b73cc: Link UP Sep 10 23:23:23.270911 systemd-networkd[1425]: cali70bf28b73cc: Gained carrier Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.067 [INFO][4275] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67f6d9bb89--twvpl-eth0 calico-apiserver-67f6d9bb89- calico-apiserver 769543eb-ef20-4f8e-9322-92751538eec2 819 0 2025-09-10 23:23:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67f6d9bb89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67f6d9bb89-twvpl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali70bf28b73cc [] [] }} ContainerID="5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-twvpl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--twvpl-" Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.067 [INFO][4275] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" 
Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-twvpl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--twvpl-eth0" Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.112 [INFO][4306] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" HandleID="k8s-pod-network.5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" Workload="localhost-k8s-calico--apiserver--67f6d9bb89--twvpl-eth0" Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.112 [INFO][4306] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" HandleID="k8s-pod-network.5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" Workload="localhost-k8s-calico--apiserver--67f6d9bb89--twvpl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dd740), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67f6d9bb89-twvpl", "timestamp":"2025-09-10 23:23:23.112413261 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.112 [INFO][4306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.201 [INFO][4306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.201 [INFO][4306] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.225 [INFO][4306] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" host="localhost" Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.230 [INFO][4306] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.234 [INFO][4306] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.238 [INFO][4306] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.240 [INFO][4306] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.240 [INFO][4306] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" host="localhost" Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.248 [INFO][4306] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77 Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.253 [INFO][4306] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" host="localhost" Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.261 [INFO][4306] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" host="localhost" Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.261 [INFO][4306] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" host="localhost" Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.261 [INFO][4306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:23:23.289277 containerd[1527]: 2025-09-10 23:23:23.261 [INFO][4306] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" HandleID="k8s-pod-network.5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" Workload="localhost-k8s-calico--apiserver--67f6d9bb89--twvpl-eth0" Sep 10 23:23:23.290014 containerd[1527]: 2025-09-10 23:23:23.264 [INFO][4275] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-twvpl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--twvpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67f6d9bb89--twvpl-eth0", GenerateName:"calico-apiserver-67f6d9bb89-", Namespace:"calico-apiserver", SelfLink:"", UID:"769543eb-ef20-4f8e-9322-92751538eec2", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f6d9bb89", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67f6d9bb89-twvpl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70bf28b73cc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:23.290014 containerd[1527]: 2025-09-10 23:23:23.264 [INFO][4275] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-twvpl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--twvpl-eth0" Sep 10 23:23:23.290014 containerd[1527]: 2025-09-10 23:23:23.264 [INFO][4275] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70bf28b73cc ContainerID="5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-twvpl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--twvpl-eth0" Sep 10 23:23:23.290014 containerd[1527]: 2025-09-10 23:23:23.272 [INFO][4275] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-twvpl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--twvpl-eth0" Sep 10 23:23:23.290014 containerd[1527]: 2025-09-10 23:23:23.272 [INFO][4275] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-twvpl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--twvpl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67f6d9bb89--twvpl-eth0", GenerateName:"calico-apiserver-67f6d9bb89-", Namespace:"calico-apiserver", SelfLink:"", UID:"769543eb-ef20-4f8e-9322-92751538eec2", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f6d9bb89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77", Pod:"calico-apiserver-67f6d9bb89-twvpl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali70bf28b73cc", MAC:"1a:8e:a8:0c:e9:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:23.290014 containerd[1527]: 2025-09-10 23:23:23.283 [INFO][4275] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-twvpl" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--twvpl-eth0" Sep 10 23:23:23.298132 containerd[1527]: time="2025-09-10T23:23:23.297765984Z" level=info msg="connecting to shim 1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791" address="unix:///run/containerd/s/9c20c95192a0667b60f8aec407aceac19ece0267fdaee2a1d8a6ca63a6b5e0cc" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:23.323532 systemd[1]: Started cri-containerd-1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791.scope - libcontainer container 1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791. Sep 10 23:23:23.332163 containerd[1527]: time="2025-09-10T23:23:23.331770372Z" level=info msg="connecting to shim 5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77" address="unix:///run/containerd/s/bdf044b0af5bfaaf86d95c826c95f77056b46784557e4ce8f6f8848595ed3c3c" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:23.345621 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:23.361465 systemd[1]: Started cri-containerd-5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77.scope - libcontainer container 5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77. 
Sep 10 23:23:23.380311 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:23.381977 containerd[1527]: time="2025-09-10T23:23:23.381929055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9dwv,Uid:067c0573-11b6-4149-ab4e-f6083aab0f6f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791\"" Sep 10 23:23:23.382486 systemd-networkd[1425]: cali57c3485227c: Link UP Sep 10 23:23:23.383922 systemd-networkd[1425]: cali57c3485227c: Gained carrier Sep 10 23:23:23.398791 containerd[1527]: time="2025-09-10T23:23:23.398553959Z" level=info msg="CreateContainer within sandbox \"1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.087 [INFO][4288] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--57b95bdd5f--cfktv-eth0 calico-kube-controllers-57b95bdd5f- calico-system 01a270a8-5eb9-47d7-82a0-d986ee861f20 818 0 2025-09-10 23:23:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57b95bdd5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-57b95bdd5f-cfktv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali57c3485227c [] [] }} ContainerID="f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" Namespace="calico-system" Pod="calico-kube-controllers-57b95bdd5f-cfktv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b95bdd5f--cfktv-" Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.087 [INFO][4288] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" Namespace="calico-system" Pod="calico-kube-controllers-57b95bdd5f-cfktv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b95bdd5f--cfktv-eth0" Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.125 [INFO][4321] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" HandleID="k8s-pod-network.f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" Workload="localhost-k8s-calico--kube--controllers--57b95bdd5f--cfktv-eth0" Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.125 [INFO][4321] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" HandleID="k8s-pod-network.f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" Workload="localhost-k8s-calico--kube--controllers--57b95bdd5f--cfktv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d940), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-57b95bdd5f-cfktv", "timestamp":"2025-09-10 23:23:23.125565747 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.125 [INFO][4321] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.261 [INFO][4321] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.261 [INFO][4321] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.325 [INFO][4321] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" host="localhost" Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.337 [INFO][4321] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.344 [INFO][4321] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.349 [INFO][4321] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.352 [INFO][4321] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.352 [INFO][4321] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" host="localhost" Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.356 [INFO][4321] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986 Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.363 [INFO][4321] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" host="localhost" Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.375 [INFO][4321] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 
handle="k8s-pod-network.f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" host="localhost" Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.375 [INFO][4321] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" host="localhost" Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.375 [INFO][4321] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:23:23.404394 containerd[1527]: 2025-09-10 23:23:23.375 [INFO][4321] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" HandleID="k8s-pod-network.f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" Workload="localhost-k8s-calico--kube--controllers--57b95bdd5f--cfktv-eth0" Sep 10 23:23:23.404943 containerd[1527]: 2025-09-10 23:23:23.377 [INFO][4288] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" Namespace="calico-system" Pod="calico-kube-controllers-57b95bdd5f-cfktv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b95bdd5f--cfktv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57b95bdd5f--cfktv-eth0", GenerateName:"calico-kube-controllers-57b95bdd5f-", Namespace:"calico-system", SelfLink:"", UID:"01a270a8-5eb9-47d7-82a0-d986ee861f20", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 23, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57b95bdd5f", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-57b95bdd5f-cfktv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali57c3485227c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:23.404943 containerd[1527]: 2025-09-10 23:23:23.378 [INFO][4288] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" Namespace="calico-system" Pod="calico-kube-controllers-57b95bdd5f-cfktv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b95bdd5f--cfktv-eth0" Sep 10 23:23:23.404943 containerd[1527]: 2025-09-10 23:23:23.378 [INFO][4288] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali57c3485227c ContainerID="f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" Namespace="calico-system" Pod="calico-kube-controllers-57b95bdd5f-cfktv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b95bdd5f--cfktv-eth0" Sep 10 23:23:23.404943 containerd[1527]: 2025-09-10 23:23:23.384 [INFO][4288] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" Namespace="calico-system" Pod="calico-kube-controllers-57b95bdd5f-cfktv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b95bdd5f--cfktv-eth0" Sep 10 23:23:23.404943 containerd[1527]: 
2025-09-10 23:23:23.385 [INFO][4288] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" Namespace="calico-system" Pod="calico-kube-controllers-57b95bdd5f-cfktv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b95bdd5f--cfktv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--57b95bdd5f--cfktv-eth0", GenerateName:"calico-kube-controllers-57b95bdd5f-", Namespace:"calico-system", SelfLink:"", UID:"01a270a8-5eb9-47d7-82a0-d986ee861f20", ResourceVersion:"818", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 23, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57b95bdd5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986", Pod:"calico-kube-controllers-57b95bdd5f-cfktv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali57c3485227c", MAC:"02:27:65:dd:fa:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:23.404943 containerd[1527]: 
2025-09-10 23:23:23.399 [INFO][4288] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" Namespace="calico-system" Pod="calico-kube-controllers-57b95bdd5f-cfktv" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--57b95bdd5f--cfktv-eth0" Sep 10 23:23:23.420222 containerd[1527]: time="2025-09-10T23:23:23.420113395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6d9bb89-twvpl,Uid:769543eb-ef20-4f8e-9322-92751538eec2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77\"" Sep 10 23:23:23.422819 containerd[1527]: time="2025-09-10T23:23:23.422626710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 10 23:23:23.429704 containerd[1527]: time="2025-09-10T23:23:23.429633515Z" level=info msg="Container e95e41ebc65d71f43dd8015d08e5dd4d66f29b6924fb6680e365c81047a09a1f: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:23.438751 containerd[1527]: time="2025-09-10T23:23:23.438690827Z" level=info msg="CreateContainer within sandbox \"1766d65386947a471709839d96f360ff13cf5ebb5be1a8478fdf05fbc585d791\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e95e41ebc65d71f43dd8015d08e5dd4d66f29b6924fb6680e365c81047a09a1f\"" Sep 10 23:23:23.439458 containerd[1527]: time="2025-09-10T23:23:23.439414204Z" level=info msg="StartContainer for \"e95e41ebc65d71f43dd8015d08e5dd4d66f29b6924fb6680e365c81047a09a1f\"" Sep 10 23:23:23.440625 containerd[1527]: time="2025-09-10T23:23:23.440590306Z" level=info msg="connecting to shim e95e41ebc65d71f43dd8015d08e5dd4d66f29b6924fb6680e365c81047a09a1f" address="unix:///run/containerd/s/9c20c95192a0667b60f8aec407aceac19ece0267fdaee2a1d8a6ca63a6b5e0cc" protocol=ttrpc version=3 Sep 10 23:23:23.446356 containerd[1527]: time="2025-09-10T23:23:23.446307427Z" level=info msg="connecting to shim 
f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986" address="unix:///run/containerd/s/9f9758e06f955c5c23f96ef12f66fd65f02a008a8f5102fc73c982722b4b7131" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:23.461356 systemd[1]: Started cri-containerd-e95e41ebc65d71f43dd8015d08e5dd4d66f29b6924fb6680e365c81047a09a1f.scope - libcontainer container e95e41ebc65d71f43dd8015d08e5dd4d66f29b6924fb6680e365c81047a09a1f. Sep 10 23:23:23.466325 systemd[1]: Started cri-containerd-f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986.scope - libcontainer container f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986. Sep 10 23:23:23.482507 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:23.498490 containerd[1527]: time="2025-09-10T23:23:23.498275492Z" level=info msg="StartContainer for \"e95e41ebc65d71f43dd8015d08e5dd4d66f29b6924fb6680e365c81047a09a1f\" returns successfully" Sep 10 23:23:23.518738 containerd[1527]: time="2025-09-10T23:23:23.518590093Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57b95bdd5f-cfktv,Uid:01a270a8-5eb9-47d7-82a0-d986ee861f20,Namespace:calico-system,Attempt:0,} returns sandbox id \"f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986\"" Sep 10 23:23:24.201804 kubelet[2678]: I0910 23:23:24.201378 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t9dwv" podStartSLOduration=33.201330525 podStartE2EDuration="33.201330525s" podCreationTimestamp="2025-09-10 23:22:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:23:24.200983341 +0000 UTC m=+39.281471037" watchObservedRunningTime="2025-09-10 23:23:24.201330525 +0000 UTC m=+39.281818221" Sep 10 23:23:24.432280 systemd-networkd[1425]: cali196180250d1: Gained IPv6LL Sep 10 
23:23:24.880542 systemd-networkd[1425]: cali57c3485227c: Gained IPv6LL Sep 10 23:23:25.005426 containerd[1527]: time="2025-09-10T23:23:25.005376776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-gh44n,Uid:6e9b8687-e618-4f3f-b612-90e366865f02,Namespace:calico-system,Attempt:0,}" Sep 10 23:23:25.009922 systemd-networkd[1425]: cali70bf28b73cc: Gained IPv6LL Sep 10 23:23:25.068433 containerd[1527]: time="2025-09-10T23:23:25.068371515Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:25.072376 containerd[1527]: time="2025-09-10T23:23:25.072291298Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 10 23:23:25.074897 containerd[1527]: time="2025-09-10T23:23:25.074812831Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:25.077524 containerd[1527]: time="2025-09-10T23:23:25.077486910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:25.078332 containerd[1527]: time="2025-09-10T23:23:25.078185356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 1.655511757s" Sep 10 23:23:25.078332 containerd[1527]: time="2025-09-10T23:23:25.078230284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference 
\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 10 23:23:25.080244 containerd[1527]: time="2025-09-10T23:23:25.080202117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 10 23:23:25.084186 containerd[1527]: time="2025-09-10T23:23:25.084124661Z" level=info msg="CreateContainer within sandbox \"5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 23:23:25.100178 containerd[1527]: time="2025-09-10T23:23:25.099810114Z" level=info msg="Container a706e9bed5ec442b4e41b5e04d89f3c5a091404412630cbf4f7b9b043b5cd89d: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:25.113445 containerd[1527]: time="2025-09-10T23:23:25.113340941Z" level=info msg="CreateContainer within sandbox \"5d98482f260c900f8cbf35af53acdf11512c4715e1e1f38e25dd0c54e1569d77\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a706e9bed5ec442b4e41b5e04d89f3c5a091404412630cbf4f7b9b043b5cd89d\"" Sep 10 23:23:25.114711 containerd[1527]: time="2025-09-10T23:23:25.114580964Z" level=info msg="StartContainer for \"a706e9bed5ec442b4e41b5e04d89f3c5a091404412630cbf4f7b9b043b5cd89d\"" Sep 10 23:23:25.116633 containerd[1527]: time="2025-09-10T23:23:25.116586244Z" level=info msg="connecting to shim a706e9bed5ec442b4e41b5e04d89f3c5a091404412630cbf4f7b9b043b5cd89d" address="unix:///run/containerd/s/bdf044b0af5bfaaf86d95c826c95f77056b46784557e4ce8f6f8848595ed3c3c" protocol=ttrpc version=3 Sep 10 23:23:25.150322 systemd[1]: Started cri-containerd-a706e9bed5ec442b4e41b5e04d89f3c5a091404412630cbf4f7b9b043b5cd89d.scope - libcontainer container a706e9bed5ec442b4e41b5e04d89f3c5a091404412630cbf4f7b9b043b5cd89d. 
Sep 10 23:23:25.152977 systemd-networkd[1425]: caliee083c89840: Link UP Sep 10 23:23:25.153336 systemd-networkd[1425]: caliee083c89840: Gained carrier Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.070 [INFO][4543] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--gh44n-eth0 goldmane-54d579b49d- calico-system 6e9b8687-e618-4f3f-b612-90e366865f02 820 0 2025-09-10 23:23:04 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-gh44n eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] caliee083c89840 [] [] }} ContainerID="1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" Namespace="calico-system" Pod="goldmane-54d579b49d-gh44n" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gh44n-" Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.070 [INFO][4543] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" Namespace="calico-system" Pod="goldmane-54d579b49d-gh44n" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gh44n-eth0" Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.102 [INFO][4561] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" HandleID="k8s-pod-network.1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" Workload="localhost-k8s-goldmane--54d579b49d--gh44n-eth0" Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.103 [INFO][4561] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" 
HandleID="k8s-pod-network.1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" Workload="localhost-k8s-goldmane--54d579b49d--gh44n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c570), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-gh44n", "timestamp":"2025-09-10 23:23:25.102254353 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.104 [INFO][4561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.104 [INFO][4561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.104 [INFO][4561] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.115 [INFO][4561] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" host="localhost" Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.121 [INFO][4561] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.127 [INFO][4561] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.129 [INFO][4561] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.131 [INFO][4561] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.131 
[INFO][4561] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" host="localhost" Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.133 [INFO][4561] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9 Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.138 [INFO][4561] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" host="localhost" Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.146 [INFO][4561] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" host="localhost" Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.146 [INFO][4561] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" host="localhost" Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.146 [INFO][4561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 10 23:23:25.171473 containerd[1527]: 2025-09-10 23:23:25.146 [INFO][4561] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" HandleID="k8s-pod-network.1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" Workload="localhost-k8s-goldmane--54d579b49d--gh44n-eth0" Sep 10 23:23:25.172023 containerd[1527]: 2025-09-10 23:23:25.148 [INFO][4543] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" Namespace="calico-system" Pod="goldmane-54d579b49d-gh44n" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gh44n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--gh44n-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6e9b8687-e618-4f3f-b612-90e366865f02", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 23, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-gh44n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliee083c89840", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:25.172023 containerd[1527]: 2025-09-10 23:23:25.148 [INFO][4543] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" Namespace="calico-system" Pod="goldmane-54d579b49d-gh44n" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gh44n-eth0" Sep 10 23:23:25.172023 containerd[1527]: 2025-09-10 23:23:25.148 [INFO][4543] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee083c89840 ContainerID="1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" Namespace="calico-system" Pod="goldmane-54d579b49d-gh44n" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gh44n-eth0" Sep 10 23:23:25.172023 containerd[1527]: 2025-09-10 23:23:25.153 [INFO][4543] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" Namespace="calico-system" Pod="goldmane-54d579b49d-gh44n" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gh44n-eth0" Sep 10 23:23:25.172023 containerd[1527]: 2025-09-10 23:23:25.154 [INFO][4543] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" Namespace="calico-system" Pod="goldmane-54d579b49d-gh44n" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gh44n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--gh44n-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6e9b8687-e618-4f3f-b612-90e366865f02", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 23, 4, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9", Pod:"goldmane-54d579b49d-gh44n", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"caliee083c89840", MAC:"0a:7a:51:e9:a1:2a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:25.172023 containerd[1527]: 2025-09-10 23:23:25.167 [INFO][4543] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" Namespace="calico-system" Pod="goldmane-54d579b49d-gh44n" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--gh44n-eth0" Sep 10 23:23:25.203745 containerd[1527]: time="2025-09-10T23:23:25.203689827Z" level=info msg="connecting to shim 1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9" address="unix:///run/containerd/s/e4215e28e48f4fd3931b5648904d515c1ca69229e9e7a73029e427190374e050" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:25.228993 containerd[1527]: time="2025-09-10T23:23:25.228932515Z" level=info msg="StartContainer for \"a706e9bed5ec442b4e41b5e04d89f3c5a091404412630cbf4f7b9b043b5cd89d\" returns successfully" Sep 10 23:23:25.230867 systemd[1]: Started 
cri-containerd-1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9.scope - libcontainer container 1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9. Sep 10 23:23:25.263277 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:25.304597 containerd[1527]: time="2025-09-10T23:23:25.304550918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-gh44n,Uid:6e9b8687-e618-4f3f-b612-90e366865f02,Namespace:calico-system,Attempt:0,} returns sandbox id \"1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9\"" Sep 10 23:23:26.009667 containerd[1527]: time="2025-09-10T23:23:26.009624708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2vxr4,Uid:81ee5a93-1f6d-400e-9d58-87b6a038fce5,Namespace:kube-system,Attempt:0,}" Sep 10 23:23:26.173114 systemd-networkd[1425]: cali2700f0958a2: Link UP Sep 10 23:23:26.174148 systemd-networkd[1425]: cali2700f0958a2: Gained carrier Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.057 [INFO][4667] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--2vxr4-eth0 coredns-674b8bbfcf- kube-system 81ee5a93-1f6d-400e-9d58-87b6a038fce5 822 0 2025-09-10 23:22:51 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-2vxr4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2700f0958a2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vxr4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vxr4-" Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 
23:23:26.057 [INFO][4667] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vxr4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vxr4-eth0" Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.090 [INFO][4681] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" HandleID="k8s-pod-network.237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" Workload="localhost-k8s-coredns--674b8bbfcf--2vxr4-eth0" Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.090 [INFO][4681] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" HandleID="k8s-pod-network.237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" Workload="localhost-k8s-coredns--674b8bbfcf--2vxr4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c760), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-2vxr4", "timestamp":"2025-09-10 23:23:26.090192685 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.090 [INFO][4681] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.090 [INFO][4681] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.090 [INFO][4681] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.115 [INFO][4681] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" host="localhost" Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.129 [INFO][4681] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.144 [INFO][4681] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.147 [INFO][4681] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.151 [INFO][4681] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.151 [INFO][4681] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" host="localhost" Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.153 [INFO][4681] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.158 [INFO][4681] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" host="localhost" Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.165 [INFO][4681] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" host="localhost" Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.165 [INFO][4681] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" host="localhost" Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.165 [INFO][4681] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 10 23:23:26.190497 containerd[1527]: 2025-09-10 23:23:26.165 [INFO][4681] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" HandleID="k8s-pod-network.237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" Workload="localhost-k8s-coredns--674b8bbfcf--2vxr4-eth0" Sep 10 23:23:26.193199 containerd[1527]: 2025-09-10 23:23:26.167 [INFO][4667] cni-plugin/k8s.go 418: Populated endpoint ContainerID="237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vxr4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vxr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2vxr4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"81ee5a93-1f6d-400e-9d58-87b6a038fce5", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-2vxr4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2700f0958a2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:26.193199 containerd[1527]: 2025-09-10 23:23:26.168 [INFO][4667] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vxr4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vxr4-eth0" Sep 10 23:23:26.193199 containerd[1527]: 2025-09-10 23:23:26.168 [INFO][4667] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2700f0958a2 ContainerID="237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vxr4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vxr4-eth0" Sep 10 23:23:26.193199 containerd[1527]: 2025-09-10 23:23:26.174 [INFO][4667] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vxr4" 
WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vxr4-eth0" Sep 10 23:23:26.193199 containerd[1527]: 2025-09-10 23:23:26.176 [INFO][4667] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vxr4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vxr4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--2vxr4-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"81ee5a93-1f6d-400e-9d58-87b6a038fce5", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 22, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c", Pod:"coredns-674b8bbfcf-2vxr4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2700f0958a2", MAC:"9a:46:64:63:ec:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:26.193199 containerd[1527]: 2025-09-10 23:23:26.185 [INFO][4667] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" Namespace="kube-system" Pod="coredns-674b8bbfcf-2vxr4" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--2vxr4-eth0" Sep 10 23:23:26.282833 containerd[1527]: time="2025-09-10T23:23:26.282684205Z" level=info msg="connecting to shim 237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c" address="unix:///run/containerd/s/a084ce2d38d6a6dca724bfa291e579f6cf5a90dfadf885edc1f352bb49f78b34" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:26.311069 kubelet[2678]: I0910 23:23:26.311006 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67f6d9bb89-twvpl" podStartSLOduration=24.653925998 podStartE2EDuration="26.310987277s" podCreationTimestamp="2025-09-10 23:23:00 +0000 UTC" firstStartedPulling="2025-09-10 23:23:23.422306049 +0000 UTC m=+38.502793745" lastFinishedPulling="2025-09-10 23:23:25.079367328 +0000 UTC m=+40.159855024" observedRunningTime="2025-09-10 23:23:26.307132843 +0000 UTC m=+41.387620539" watchObservedRunningTime="2025-09-10 23:23:26.310987277 +0000 UTC m=+41.391474973" Sep 10 23:23:26.329368 systemd[1]: Started cri-containerd-237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c.scope - libcontainer container 237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c. 
Sep 10 23:23:26.352558 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:26.414314 containerd[1527]: time="2025-09-10T23:23:26.414273909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2vxr4,Uid:81ee5a93-1f6d-400e-9d58-87b6a038fce5,Namespace:kube-system,Attempt:0,} returns sandbox id \"237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c\"" Sep 10 23:23:26.421153 containerd[1527]: time="2025-09-10T23:23:26.421103624Z" level=info msg="CreateContainer within sandbox \"237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 23:23:26.440107 containerd[1527]: time="2025-09-10T23:23:26.439459716Z" level=info msg="Container 14903a701cb5b78e77e4bf40c8f93ca0d81a36fed69557cce8636971b9af6043: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:26.448374 systemd[1]: Started sshd@7-10.0.0.24:22-10.0.0.1:34254.service - OpenSSH per-connection server daemon (10.0.0.1:34254). 
Sep 10 23:23:26.453500 containerd[1527]: time="2025-09-10T23:23:26.453430200Z" level=info msg="CreateContainer within sandbox \"237b75b487ed5e9977429d7a46d5aa13033d82fa89648e5738be3262dc85e49c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"14903a701cb5b78e77e4bf40c8f93ca0d81a36fed69557cce8636971b9af6043\"" Sep 10 23:23:26.454681 containerd[1527]: time="2025-09-10T23:23:26.454345841Z" level=info msg="StartContainer for \"14903a701cb5b78e77e4bf40c8f93ca0d81a36fed69557cce8636971b9af6043\"" Sep 10 23:23:26.458521 containerd[1527]: time="2025-09-10T23:23:26.457121366Z" level=info msg="connecting to shim 14903a701cb5b78e77e4bf40c8f93ca0d81a36fed69557cce8636971b9af6043" address="unix:///run/containerd/s/a084ce2d38d6a6dca724bfa291e579f6cf5a90dfadf885edc1f352bb49f78b34" protocol=ttrpc version=3 Sep 10 23:23:26.483360 systemd[1]: Started cri-containerd-14903a701cb5b78e77e4bf40c8f93ca0d81a36fed69557cce8636971b9af6043.scope - libcontainer container 14903a701cb5b78e77e4bf40c8f93ca0d81a36fed69557cce8636971b9af6043. Sep 10 23:23:26.530150 sshd[4752]: Accepted publickey for core from 10.0.0.1 port 34254 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:23:26.531750 sshd-session[4752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:23:26.538221 systemd-logind[1498]: New session 8 of user core. Sep 10 23:23:26.546468 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 10 23:23:26.564724 containerd[1527]: time="2025-09-10T23:23:26.564667744Z" level=info msg="StartContainer for \"14903a701cb5b78e77e4bf40c8f93ca0d81a36fed69557cce8636971b9af6043\" returns successfully"
Sep 10 23:23:26.800333 systemd-networkd[1425]: caliee083c89840: Gained IPv6LL
Sep 10 23:23:26.913951 sshd[4783]: Connection closed by 10.0.0.1 port 34254
Sep 10 23:23:26.914624 sshd-session[4752]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:26.920458 systemd[1]: sshd@7-10.0.0.24:22-10.0.0.1:34254.service: Deactivated successfully.
Sep 10 23:23:26.928201 systemd[1]: session-8.scope: Deactivated successfully.
Sep 10 23:23:26.929634 systemd-logind[1498]: Session 8 logged out. Waiting for processes to exit.
Sep 10 23:23:26.932010 systemd-logind[1498]: Removed session 8.
Sep 10 23:23:27.006475 containerd[1527]: time="2025-09-10T23:23:27.006112883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577f7d6b4-whjln,Uid:b5586149-7cf4-4fb0-87d1-9bd24bd8e6f0,Namespace:calico-apiserver,Attempt:0,}"
Sep 10 23:23:27.007191 containerd[1527]: time="2025-09-10T23:23:27.006803761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bbr5,Uid:41e05e41-b646-4243-9c89-ac4eb4228756,Namespace:calico-system,Attempt:0,}"
Sep 10 23:23:27.007548 containerd[1527]: time="2025-09-10T23:23:27.007521083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6d9bb89-rw98g,Uid:15b1b1a9-9293-4e1d-8c21-5986b0198355,Namespace:calico-apiserver,Attempt:0,}"
Sep 10 23:23:27.230618 kubelet[2678]: I0910 23:23:27.230080 2678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 10 23:23:27.233497 systemd-networkd[1425]: cali6851f0d1874: Link UP
Sep 10 23:23:27.234476 systemd-networkd[1425]: cali6851f0d1874: Gained carrier
Sep 10 23:23:27.247657 kubelet[2678]: I0910 23:23:27.246392 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2vxr4" podStartSLOduration=36.24637261 podStartE2EDuration="36.24637261s" podCreationTimestamp="2025-09-10 23:22:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:23:27.244104303 +0000 UTC m=+42.324591999" watchObservedRunningTime="2025-09-10 23:23:27.24637261 +0000 UTC m=+42.326860306"
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.102 [INFO][4833] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--67f6d9bb89--rw98g-eth0 calico-apiserver-67f6d9bb89- calico-apiserver 15b1b1a9-9293-4e1d-8c21-5986b0198355 817 0 2025-09-10 23:23:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:67f6d9bb89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-67f6d9bb89-rw98g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6851f0d1874 [] [] }} ContainerID="9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-rw98g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--rw98g-"
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.102 [INFO][4833] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-rw98g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--rw98g-eth0"
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.166 [INFO][4865] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" HandleID="k8s-pod-network.9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" Workload="localhost-k8s-calico--apiserver--67f6d9bb89--rw98g-eth0"
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.166 [INFO][4865] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" HandleID="k8s-pod-network.9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" Workload="localhost-k8s-calico--apiserver--67f6d9bb89--rw98g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000119980), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-67f6d9bb89-rw98g", "timestamp":"2025-09-10 23:23:27.166555134 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.166 [INFO][4865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.166 [INFO][4865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.166 [INFO][4865] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.183 [INFO][4865] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" host="localhost"
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.193 [INFO][4865] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.201 [INFO][4865] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.203 [INFO][4865] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.205 [INFO][4865] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.205 [INFO][4865] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" host="localhost"
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.207 [INFO][4865] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.211 [INFO][4865] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" host="localhost"
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.219 [INFO][4865] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" host="localhost"
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.219 [INFO][4865] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" host="localhost"
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.219 [INFO][4865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 23:23:27.257730 containerd[1527]: 2025-09-10 23:23:27.219 [INFO][4865] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" HandleID="k8s-pod-network.9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" Workload="localhost-k8s-calico--apiserver--67f6d9bb89--rw98g-eth0"
Sep 10 23:23:27.259309 containerd[1527]: 2025-09-10 23:23:27.225 [INFO][4833] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-rw98g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--rw98g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67f6d9bb89--rw98g-eth0", GenerateName:"calico-apiserver-67f6d9bb89-", Namespace:"calico-apiserver", SelfLink:"", UID:"15b1b1a9-9293-4e1d-8c21-5986b0198355", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f6d9bb89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-67f6d9bb89-rw98g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6851f0d1874", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 23:23:27.259309 containerd[1527]: 2025-09-10 23:23:27.225 [INFO][4833] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-rw98g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--rw98g-eth0"
Sep 10 23:23:27.259309 containerd[1527]: 2025-09-10 23:23:27.225 [INFO][4833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6851f0d1874 ContainerID="9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-rw98g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--rw98g-eth0"
Sep 10 23:23:27.259309 containerd[1527]: 2025-09-10 23:23:27.233 [INFO][4833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-rw98g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--rw98g-eth0"
Sep 10 23:23:27.259309 containerd[1527]: 2025-09-10 23:23:27.236 [INFO][4833] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-rw98g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--rw98g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--67f6d9bb89--rw98g-eth0", GenerateName:"calico-apiserver-67f6d9bb89-", Namespace:"calico-apiserver", SelfLink:"", UID:"15b1b1a9-9293-4e1d-8c21-5986b0198355", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 23, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"67f6d9bb89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc", Pod:"calico-apiserver-67f6d9bb89-rw98g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6851f0d1874", MAC:"de:69:fe:02:90:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 23:23:27.259309 containerd[1527]: 2025-09-10 23:23:27.253 [INFO][4833] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" Namespace="calico-apiserver" Pod="calico-apiserver-67f6d9bb89-rw98g" WorkloadEndpoint="localhost-k8s-calico--apiserver--67f6d9bb89--rw98g-eth0"
Sep 10 23:23:27.317843 containerd[1527]: time="2025-09-10T23:23:27.317589458Z" level=info msg="connecting to shim 9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc" address="unix:///run/containerd/s/2da7935ec011a175f8f51bba3b01d9ee29f8590b375ef79efa678d99ead8807d" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:23:27.356966 systemd-networkd[1425]: cali843e530e283: Link UP
Sep 10 23:23:27.358692 systemd-networkd[1425]: cali843e530e283: Gained carrier
Sep 10 23:23:27.392690 systemd[1]: Started cri-containerd-9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc.scope - libcontainer container 9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc.
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.104 [INFO][4824] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--5bbr5-eth0 csi-node-driver- calico-system 41e05e41-b646-4243-9c89-ac4eb4228756 717 0 2025-09-10 23:23:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-5bbr5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali843e530e283 [] [] }} ContainerID="8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" Namespace="calico-system" Pod="csi-node-driver-5bbr5" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bbr5-"
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.104 [INFO][4824] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" Namespace="calico-system" Pod="csi-node-driver-5bbr5" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bbr5-eth0"
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.166 [INFO][4863] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" HandleID="k8s-pod-network.8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" Workload="localhost-k8s-csi--node--driver--5bbr5-eth0"
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.166 [INFO][4863] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" HandleID="k8s-pod-network.8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" Workload="localhost-k8s-csi--node--driver--5bbr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400042c0e0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-5bbr5", "timestamp":"2025-09-10 23:23:27.166301931 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.166 [INFO][4863] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.219 [INFO][4863] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.219 [INFO][4863] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.284 [INFO][4863] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" host="localhost"
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.294 [INFO][4863] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.317 [INFO][4863] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.320 [INFO][4863] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.324 [INFO][4863] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.324 [INFO][4863] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" host="localhost"
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.328 [INFO][4863] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.336 [INFO][4863] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" host="localhost"
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.343 [INFO][4863] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" host="localhost"
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.343 [INFO][4863] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" host="localhost"
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.343 [INFO][4863] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 23:23:27.395773 containerd[1527]: 2025-09-10 23:23:27.343 [INFO][4863] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" HandleID="k8s-pod-network.8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" Workload="localhost-k8s-csi--node--driver--5bbr5-eth0"
Sep 10 23:23:27.396495 containerd[1527]: 2025-09-10 23:23:27.349 [INFO][4824] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" Namespace="calico-system" Pod="csi-node-driver-5bbr5" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bbr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5bbr5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41e05e41-b646-4243-9c89-ac4eb4228756", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 23, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-5bbr5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali843e530e283", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 23:23:27.396495 containerd[1527]: 2025-09-10 23:23:27.351 [INFO][4824] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" Namespace="calico-system" Pod="csi-node-driver-5bbr5" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bbr5-eth0"
Sep 10 23:23:27.396495 containerd[1527]: 2025-09-10 23:23:27.351 [INFO][4824] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali843e530e283 ContainerID="8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" Namespace="calico-system" Pod="csi-node-driver-5bbr5" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bbr5-eth0"
Sep 10 23:23:27.396495 containerd[1527]: 2025-09-10 23:23:27.357 [INFO][4824] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" Namespace="calico-system" Pod="csi-node-driver-5bbr5" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bbr5-eth0"
Sep 10 23:23:27.396495 containerd[1527]: 2025-09-10 23:23:27.357 [INFO][4824] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" Namespace="calico-system" Pod="csi-node-driver-5bbr5" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bbr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--5bbr5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"41e05e41-b646-4243-9c89-ac4eb4228756", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 23, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53", Pod:"csi-node-driver-5bbr5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali843e530e283", MAC:"4e:92:46:00:44:99", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 23:23:27.396495 containerd[1527]: 2025-09-10 23:23:27.371 [INFO][4824] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" Namespace="calico-system" Pod="csi-node-driver-5bbr5" WorkloadEndpoint="localhost-k8s-csi--node--driver--5bbr5-eth0"
Sep 10 23:23:27.450008 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 10 23:23:27.453571 systemd-networkd[1425]: calidab7c96e1b3: Link UP
Sep 10 23:23:27.454318 systemd-networkd[1425]: calidab7c96e1b3: Gained carrier
Sep 10 23:23:27.457577 containerd[1527]: time="2025-09-10T23:23:27.457375580Z" level=info msg="connecting to shim 8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53" address="unix:///run/containerd/s/99d5183f548c0e8ee999f2b314debb57aebf22fcfb121f3f12022dfc052313de" namespace=k8s.io protocol=ttrpc version=3
Sep 10 23:23:27.503349 systemd[1]: Started cri-containerd-8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53.scope - libcontainer container 8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53.
Sep 10 23:23:27.505367 systemd-networkd[1425]: cali2700f0958a2: Gained IPv6LL
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.112 [INFO][4814] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--577f7d6b4--whjln-eth0 calico-apiserver-577f7d6b4- calico-apiserver b5586149-7cf4-4fb0-87d1-9bd24bd8e6f0 821 0 2025-09-10 23:23:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:577f7d6b4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-577f7d6b4-whjln eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidab7c96e1b3 [] [] }} ContainerID="42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" Namespace="calico-apiserver" Pod="calico-apiserver-577f7d6b4-whjln" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f7d6b4--whjln-"
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.113 [INFO][4814] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" Namespace="calico-apiserver" Pod="calico-apiserver-577f7d6b4-whjln" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f7d6b4--whjln-eth0"
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.195 [INFO][4876] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" HandleID="k8s-pod-network.42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" Workload="localhost-k8s-calico--apiserver--577f7d6b4--whjln-eth0"
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.195 [INFO][4876] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" HandleID="k8s-pod-network.42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" Workload="localhost-k8s-calico--apiserver--577f7d6b4--whjln-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c320), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-577f7d6b4-whjln", "timestamp":"2025-09-10 23:23:27.195024998 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.195 [INFO][4876] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.343 [INFO][4876] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.344 [INFO][4876] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.383 [INFO][4876] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" host="localhost"
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.395 [INFO][4876] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.406 [INFO][4876] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.409 [INFO][4876] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.414 [INFO][4876] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.414 [INFO][4876] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" host="localhost"
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.419 [INFO][4876] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.425 [INFO][4876] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" host="localhost"
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.434 [INFO][4876] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.137/26] block=192.168.88.128/26 handle="k8s-pod-network.42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" host="localhost"
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.434 [INFO][4876] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.137/26] handle="k8s-pod-network.42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" host="localhost"
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.434 [INFO][4876] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 10 23:23:27.509348 containerd[1527]: 2025-09-10 23:23:27.434 [INFO][4876] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.137/26] IPv6=[] ContainerID="42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" HandleID="k8s-pod-network.42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" Workload="localhost-k8s-calico--apiserver--577f7d6b4--whjln-eth0"
Sep 10 23:23:27.509816 containerd[1527]: 2025-09-10 23:23:27.448 [INFO][4814] cni-plugin/k8s.go 418: Populated endpoint ContainerID="42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" Namespace="calico-apiserver" Pod="calico-apiserver-577f7d6b4-whjln" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f7d6b4--whjln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--577f7d6b4--whjln-eth0", GenerateName:"calico-apiserver-577f7d6b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"b5586149-7cf4-4fb0-87d1-9bd24bd8e6f0", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577f7d6b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-577f7d6b4-whjln", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidab7c96e1b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 10 23:23:27.509816 containerd[1527]: 2025-09-10 23:23:27.448 [INFO][4814] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.137/32] ContainerID="42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" Namespace="calico-apiserver" Pod="calico-apiserver-577f7d6b4-whjln" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f7d6b4--whjln-eth0"
Sep 10 23:23:27.509816 containerd[1527]: 2025-09-10 23:23:27.448 [INFO][4814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidab7c96e1b3 ContainerID="42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" Namespace="calico-apiserver" Pod="calico-apiserver-577f7d6b4-whjln" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f7d6b4--whjln-eth0"
Sep 10 23:23:27.509816 containerd[1527]: 2025-09-10 23:23:27.455 [INFO][4814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" Namespace="calico-apiserver" Pod="calico-apiserver-577f7d6b4-whjln" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f7d6b4--whjln-eth0"
Sep 10 23:23:27.509816 containerd[1527]: 2025-09-10 23:23:27.457 [INFO][4814] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to
endpoint ContainerID="42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" Namespace="calico-apiserver" Pod="calico-apiserver-577f7d6b4-whjln" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f7d6b4--whjln-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--577f7d6b4--whjln-eth0", GenerateName:"calico-apiserver-577f7d6b4-", Namespace:"calico-apiserver", SelfLink:"", UID:"b5586149-7cf4-4fb0-87d1-9bd24bd8e6f0", ResourceVersion:"821", Generation:0, CreationTimestamp:time.Date(2025, time.September, 10, 23, 23, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"577f7d6b4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a", Pod:"calico-apiserver-577f7d6b4-whjln", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidab7c96e1b3", MAC:"62:0b:16:d5:98:8d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 10 23:23:27.509816 containerd[1527]: 2025-09-10 23:23:27.501 [INFO][4814] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" Namespace="calico-apiserver" Pod="calico-apiserver-577f7d6b4-whjln" WorkloadEndpoint="localhost-k8s-calico--apiserver--577f7d6b4--whjln-eth0" Sep 10 23:23:27.562205 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:27.604661 containerd[1527]: time="2025-09-10T23:23:27.604618856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-67f6d9bb89-rw98g,Uid:15b1b1a9-9293-4e1d-8c21-5986b0198355,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc\"" Sep 10 23:23:27.701353 containerd[1527]: time="2025-09-10T23:23:27.701304334Z" level=info msg="CreateContainer within sandbox \"9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 23:23:27.705653 containerd[1527]: time="2025-09-10T23:23:27.705603349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-5bbr5,Uid:41e05e41-b646-4243-9c89-ac4eb4228756,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53\"" Sep 10 23:23:27.717582 containerd[1527]: time="2025-09-10T23:23:27.717534107Z" level=info msg="Container 5cc5705dd479a10e624ca8669ea7724ce7a06c962d022b6534d755e16241a281: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:27.738896 containerd[1527]: time="2025-09-10T23:23:27.738850589Z" level=info msg="CreateContainer within sandbox \"9f5c98bb548865b37da3cf40ce86bfceb92c5faba11c47f6487d1cd832c3b2fc\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5cc5705dd479a10e624ca8669ea7724ce7a06c962d022b6534d755e16241a281\"" Sep 10 23:23:27.740126 containerd[1527]: time="2025-09-10T23:23:27.740046353Z" level=info msg="StartContainer for 
\"5cc5705dd479a10e624ca8669ea7724ce7a06c962d022b6534d755e16241a281\"" Sep 10 23:23:27.740126 containerd[1527]: time="2025-09-10T23:23:27.741058646Z" level=info msg="connecting to shim 5cc5705dd479a10e624ca8669ea7724ce7a06c962d022b6534d755e16241a281" address="unix:///run/containerd/s/2da7935ec011a175f8f51bba3b01d9ee29f8590b375ef79efa678d99ead8807d" protocol=ttrpc version=3 Sep 10 23:23:27.750383 containerd[1527]: time="2025-09-10T23:23:27.750338431Z" level=info msg="connecting to shim 42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a" address="unix:///run/containerd/s/f829e31759931ac35ada84edde2d991c942df636115e3998fcd25213f223eaca" namespace=k8s.io protocol=ttrpc version=3 Sep 10 23:23:27.776406 systemd[1]: Started cri-containerd-5cc5705dd479a10e624ca8669ea7724ce7a06c962d022b6534d755e16241a281.scope - libcontainer container 5cc5705dd479a10e624ca8669ea7724ce7a06c962d022b6534d755e16241a281. Sep 10 23:23:27.798376 systemd[1]: Started cri-containerd-42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a.scope - libcontainer container 42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a. 
Sep 10 23:23:27.817117 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 23:23:27.853029 containerd[1527]: time="2025-09-10T23:23:27.852929799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-577f7d6b4-whjln,Uid:b5586149-7cf4-4fb0-87d1-9bd24bd8e6f0,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a\"" Sep 10 23:23:27.860088 containerd[1527]: time="2025-09-10T23:23:27.860037493Z" level=info msg="CreateContainer within sandbox \"42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 10 23:23:27.866691 containerd[1527]: time="2025-09-10T23:23:27.866646542Z" level=info msg="StartContainer for \"5cc5705dd479a10e624ca8669ea7724ce7a06c962d022b6534d755e16241a281\" returns successfully" Sep 10 23:23:27.878937 containerd[1527]: time="2025-09-10T23:23:27.878890074Z" level=info msg="Container 53c5ab4e395bdab8b18259868becae615cb40458428caa566a72889e103bff65: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:27.888878 containerd[1527]: time="2025-09-10T23:23:27.888802408Z" level=info msg="CreateContainer within sandbox \"42215f45bacf9cfc89f80ca98784b6e51c50b247651bcd00e274b9640d41528a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"53c5ab4e395bdab8b18259868becae615cb40458428caa566a72889e103bff65\"" Sep 10 23:23:27.890759 containerd[1527]: time="2025-09-10T23:23:27.890608156Z" level=info msg="StartContainer for \"53c5ab4e395bdab8b18259868becae615cb40458428caa566a72889e103bff65\"" Sep 10 23:23:27.892418 containerd[1527]: time="2025-09-10T23:23:27.892370657Z" level=info msg="connecting to shim 53c5ab4e395bdab8b18259868becae615cb40458428caa566a72889e103bff65" address="unix:///run/containerd/s/f829e31759931ac35ada84edde2d991c942df636115e3998fcd25213f223eaca" protocol=ttrpc version=3 
Sep 10 23:23:27.925373 systemd[1]: Started cri-containerd-53c5ab4e395bdab8b18259868becae615cb40458428caa566a72889e103bff65.scope - libcontainer container 53c5ab4e395bdab8b18259868becae615cb40458428caa566a72889e103bff65. Sep 10 23:23:28.041055 containerd[1527]: time="2025-09-10T23:23:28.040885317Z" level=info msg="StartContainer for \"53c5ab4e395bdab8b18259868becae615cb40458428caa566a72889e103bff65\" returns successfully" Sep 10 23:23:28.048611 containerd[1527]: time="2025-09-10T23:23:28.048558398Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:28.049578 containerd[1527]: time="2025-09-10T23:23:28.049200465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 10 23:23:28.051161 containerd[1527]: time="2025-09-10T23:23:28.050443313Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:28.056511 containerd[1527]: time="2025-09-10T23:23:28.056470119Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:28.057960 containerd[1527]: time="2025-09-10T23:23:28.057497811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.977151987s" Sep 10 23:23:28.057960 containerd[1527]: time="2025-09-10T23:23:28.057545179Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 10 23:23:28.058691 containerd[1527]: time="2025-09-10T23:23:28.058671767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 10 23:23:28.076628 containerd[1527]: time="2025-09-10T23:23:28.076585598Z" level=info msg="CreateContainer within sandbox \"f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 10 23:23:28.099423 containerd[1527]: time="2025-09-10T23:23:28.099378284Z" level=info msg="Container 82f27b99d4dcf57f42a2010c2b627690821d7f8d84e39573c1d6cc4bc10980ad: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:28.107383 containerd[1527]: time="2025-09-10T23:23:28.107332252Z" level=info msg="CreateContainer within sandbox \"f87a76f0354336f84cd8cbed694e196087ecfc477a870db192b56f88ebd6e986\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"82f27b99d4dcf57f42a2010c2b627690821d7f8d84e39573c1d6cc4bc10980ad\"" Sep 10 23:23:28.108534 containerd[1527]: time="2025-09-10T23:23:28.108503088Z" level=info msg="StartContainer for \"82f27b99d4dcf57f42a2010c2b627690821d7f8d84e39573c1d6cc4bc10980ad\"" Sep 10 23:23:28.112223 containerd[1527]: time="2025-09-10T23:23:28.111395171Z" level=info msg="connecting to shim 82f27b99d4dcf57f42a2010c2b627690821d7f8d84e39573c1d6cc4bc10980ad" address="unix:///run/containerd/s/9f9758e06f955c5c23f96ef12f66fd65f02a008a8f5102fc73c982722b4b7131" protocol=ttrpc version=3 Sep 10 23:23:28.157362 systemd[1]: Started cri-containerd-82f27b99d4dcf57f42a2010c2b627690821d7f8d84e39573c1d6cc4bc10980ad.scope - libcontainer container 82f27b99d4dcf57f42a2010c2b627690821d7f8d84e39573c1d6cc4bc10980ad. 
Sep 10 23:23:28.219898 containerd[1527]: time="2025-09-10T23:23:28.219854442Z" level=info msg="StartContainer for \"82f27b99d4dcf57f42a2010c2b627690821d7f8d84e39573c1d6cc4bc10980ad\" returns successfully" Sep 10 23:23:28.281837 kubelet[2678]: I0910 23:23:28.280953 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-577f7d6b4-whjln" podStartSLOduration=27.280936881 podStartE2EDuration="27.280936881s" podCreationTimestamp="2025-09-10 23:23:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:23:28.26559992 +0000 UTC m=+43.346087616" watchObservedRunningTime="2025-09-10 23:23:28.280936881 +0000 UTC m=+43.361424537" Sep 10 23:23:28.281837 kubelet[2678]: I0910 23:23:28.281116 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57b95bdd5f-cfktv" podStartSLOduration=18.742487389 podStartE2EDuration="23.281111671s" podCreationTimestamp="2025-09-10 23:23:05 +0000 UTC" firstStartedPulling="2025-09-10 23:23:23.519885138 +0000 UTC m=+38.600372834" lastFinishedPulling="2025-09-10 23:23:28.05850942 +0000 UTC m=+43.138997116" observedRunningTime="2025-09-10 23:23:28.280852667 +0000 UTC m=+43.361340363" watchObservedRunningTime="2025-09-10 23:23:28.281111671 +0000 UTC m=+43.361599367" Sep 10 23:23:28.352966 containerd[1527]: time="2025-09-10T23:23:28.352842368Z" level=info msg="TaskExit event in podsandbox handler container_id:\"82f27b99d4dcf57f42a2010c2b627690821d7f8d84e39573c1d6cc4bc10980ad\" id:\"4b0e7c667dff3b62d4a7853e3b766321794d3372fb90058ade3d50aa81daed3d\" pid:5189 exit_status:1 exited_at:{seconds:1757546608 nanos:352299798}" Sep 10 23:23:28.656311 systemd-networkd[1425]: cali843e530e283: Gained IPv6LL Sep 10 23:23:28.656642 systemd-networkd[1425]: cali6851f0d1874: Gained IPv6LL Sep 10 23:23:29.255718 kubelet[2678]: I0910 23:23:29.255686 2678 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:23:29.256341 kubelet[2678]: I0910 23:23:29.256071 2678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:23:29.296341 systemd-networkd[1425]: calidab7c96e1b3: Gained IPv6LL Sep 10 23:23:29.395611 containerd[1527]: time="2025-09-10T23:23:29.395572096Z" level=info msg="TaskExit event in podsandbox handler container_id:\"82f27b99d4dcf57f42a2010c2b627690821d7f8d84e39573c1d6cc4bc10980ad\" id:\"1eead83121371a0f7e980b55d0335b69711ba57fdc08f9bb257da6912ebe28eb\" pid:5216 exited_at:{seconds:1757546609 nanos:395283849}" Sep 10 23:23:29.428035 kubelet[2678]: I0910 23:23:29.427962 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-67f6d9bb89-rw98g" podStartSLOduration=29.427828886 podStartE2EDuration="29.427828886s" podCreationTimestamp="2025-09-10 23:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 23:23:28.293157882 +0000 UTC m=+43.373645578" watchObservedRunningTime="2025-09-10 23:23:29.427828886 +0000 UTC m=+44.508316582" Sep 10 23:23:30.746684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3988202971.mount: Deactivated successfully. 
Sep 10 23:23:31.335055 containerd[1527]: time="2025-09-10T23:23:31.334721198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:31.337156 containerd[1527]: time="2025-09-10T23:23:31.337062565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332" Sep 10 23:23:31.339019 containerd[1527]: time="2025-09-10T23:23:31.338971304Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:31.343659 containerd[1527]: time="2025-09-10T23:23:31.342939686Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:31.344433 containerd[1527]: time="2025-09-10T23:23:31.344205965Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 3.2854323s" Sep 10 23:23:31.344433 containerd[1527]: time="2025-09-10T23:23:31.344333745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\"" Sep 10 23:23:31.347382 containerd[1527]: time="2025-09-10T23:23:31.347288528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 10 23:23:31.351581 containerd[1527]: time="2025-09-10T23:23:31.351405293Z" level=info msg="CreateContainer within sandbox \"1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9\" 
for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 10 23:23:31.361335 containerd[1527]: time="2025-09-10T23:23:31.361288443Z" level=info msg="Container b195044e35d154bb53328cc94dd1564f28dadf780e919024c4b984e3baaf7684: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:31.373394 containerd[1527]: time="2025-09-10T23:23:31.373271082Z" level=info msg="CreateContainer within sandbox \"1f180ffe6c1e588004a42384372ef14350b853f03f35231a5e9fa1a75f35ced9\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"b195044e35d154bb53328cc94dd1564f28dadf780e919024c4b984e3baaf7684\"" Sep 10 23:23:31.374055 containerd[1527]: time="2025-09-10T23:23:31.374020079Z" level=info msg="StartContainer for \"b195044e35d154bb53328cc94dd1564f28dadf780e919024c4b984e3baaf7684\"" Sep 10 23:23:31.376331 containerd[1527]: time="2025-09-10T23:23:31.376299436Z" level=info msg="connecting to shim b195044e35d154bb53328cc94dd1564f28dadf780e919024c4b984e3baaf7684" address="unix:///run/containerd/s/e4215e28e48f4fd3931b5648904d515c1ca69229e9e7a73029e427190374e050" protocol=ttrpc version=3 Sep 10 23:23:31.407309 systemd[1]: Started cri-containerd-b195044e35d154bb53328cc94dd1564f28dadf780e919024c4b984e3baaf7684.scope - libcontainer container b195044e35d154bb53328cc94dd1564f28dadf780e919024c4b984e3baaf7684. Sep 10 23:23:31.517487 containerd[1527]: time="2025-09-10T23:23:31.517388516Z" level=info msg="StartContainer for \"b195044e35d154bb53328cc94dd1564f28dadf780e919024c4b984e3baaf7684\" returns successfully" Sep 10 23:23:31.927385 systemd[1]: Started sshd@8-10.0.0.24:22-10.0.0.1:33514.service - OpenSSH per-connection server daemon (10.0.0.1:33514). 
Sep 10 23:23:32.010203 sshd[5272]: Accepted publickey for core from 10.0.0.1 port 33514 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:23:32.011548 sshd-session[5272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:23:32.017683 systemd-logind[1498]: New session 9 of user core. Sep 10 23:23:32.024352 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 10 23:23:32.306618 kubelet[2678]: I0910 23:23:32.306469 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-gh44n" podStartSLOduration=22.266366837 podStartE2EDuration="28.306446711s" podCreationTimestamp="2025-09-10 23:23:04 +0000 UTC" firstStartedPulling="2025-09-10 23:23:25.306294951 +0000 UTC m=+40.386782647" lastFinishedPulling="2025-09-10 23:23:31.346374825 +0000 UTC m=+46.426862521" observedRunningTime="2025-09-10 23:23:32.294438264 +0000 UTC m=+47.374925960" watchObservedRunningTime="2025-09-10 23:23:32.306446711 +0000 UTC m=+47.386934407" Sep 10 23:23:32.311773 sshd[5276]: Connection closed by 10.0.0.1 port 33514 Sep 10 23:23:32.312283 sshd-session[5272]: pam_unix(sshd:session): session closed for user core Sep 10 23:23:32.319490 systemd[1]: sshd@8-10.0.0.24:22-10.0.0.1:33514.service: Deactivated successfully. Sep 10 23:23:32.321883 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 23:23:32.323856 systemd-logind[1498]: Session 9 logged out. Waiting for processes to exit. Sep 10 23:23:32.326817 systemd-logind[1498]: Removed session 9. 
Sep 10 23:23:32.850425 containerd[1527]: time="2025-09-10T23:23:32.850379923Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:32.851398 containerd[1527]: time="2025-09-10T23:23:32.851339071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489" Sep 10 23:23:32.852032 containerd[1527]: time="2025-09-10T23:23:32.851972088Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:32.854219 containerd[1527]: time="2025-09-10T23:23:32.854190429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:32.855011 containerd[1527]: time="2025-09-10T23:23:32.854867973Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.507510914s" Sep 10 23:23:32.855011 containerd[1527]: time="2025-09-10T23:23:32.854900098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\"" Sep 10 23:23:32.864155 containerd[1527]: time="2025-09-10T23:23:32.864093552Z" level=info msg="CreateContainer within sandbox \"8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 10 23:23:32.875155 containerd[1527]: time="2025-09-10T23:23:32.874685581Z" level=info msg="Container 
f7aaee7aba0a26664e2be57b16064a1e30aae8618f994df6f0879eac7cf9752a: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:32.881833 containerd[1527]: time="2025-09-10T23:23:32.881788793Z" level=info msg="CreateContainer within sandbox \"8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"f7aaee7aba0a26664e2be57b16064a1e30aae8618f994df6f0879eac7cf9752a\"" Sep 10 23:23:32.882293 containerd[1527]: time="2025-09-10T23:23:32.882271508Z" level=info msg="StartContainer for \"f7aaee7aba0a26664e2be57b16064a1e30aae8618f994df6f0879eac7cf9752a\"" Sep 10 23:23:32.884023 containerd[1527]: time="2025-09-10T23:23:32.883970289Z" level=info msg="connecting to shim f7aaee7aba0a26664e2be57b16064a1e30aae8618f994df6f0879eac7cf9752a" address="unix:///run/containerd/s/99d5183f548c0e8ee999f2b314debb57aebf22fcfb121f3f12022dfc052313de" protocol=ttrpc version=3 Sep 10 23:23:32.911322 systemd[1]: Started cri-containerd-f7aaee7aba0a26664e2be57b16064a1e30aae8618f994df6f0879eac7cf9752a.scope - libcontainer container f7aaee7aba0a26664e2be57b16064a1e30aae8618f994df6f0879eac7cf9752a. 
Sep 10 23:23:32.977130 containerd[1527]: time="2025-09-10T23:23:32.977085169Z" level=info msg="StartContainer for \"f7aaee7aba0a26664e2be57b16064a1e30aae8618f994df6f0879eac7cf9752a\" returns successfully" Sep 10 23:23:32.980994 containerd[1527]: time="2025-09-10T23:23:32.980842627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 10 23:23:33.290023 kubelet[2678]: I0910 23:23:33.289993 2678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:23:34.060644 containerd[1527]: time="2025-09-10T23:23:34.060596803Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:34.069112 containerd[1527]: time="2025-09-10T23:23:34.061329712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 10 23:23:34.069243 containerd[1527]: time="2025-09-10T23:23:34.063057288Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:34.069243 containerd[1527]: time="2025-09-10T23:23:34.065878266Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.084890417s" Sep 10 23:23:34.069243 containerd[1527]: time="2025-09-10T23:23:34.069202600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 10 23:23:34.069848 containerd[1527]: 
time="2025-09-10T23:23:34.069820891Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 23:23:34.073603 containerd[1527]: time="2025-09-10T23:23:34.073555325Z" level=info msg="CreateContainer within sandbox \"8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 10 23:23:34.084343 containerd[1527]: time="2025-09-10T23:23:34.084297999Z" level=info msg="Container 434ee5968b6831fdafabcc8dd8ab586228853070dc2f2b07c18b2aa52c5148f5: CDI devices from CRI Config.CDIDevices: []" Sep 10 23:23:34.091236 containerd[1527]: time="2025-09-10T23:23:34.091084326Z" level=info msg="CreateContainer within sandbox \"8fa316480d86d6f8807f70f8fa3676a787081cedd7bc9e6705cf562c774aeb53\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"434ee5968b6831fdafabcc8dd8ab586228853070dc2f2b07c18b2aa52c5148f5\"" Sep 10 23:23:34.091646 containerd[1527]: time="2025-09-10T23:23:34.091621726Z" level=info msg="StartContainer for \"434ee5968b6831fdafabcc8dd8ab586228853070dc2f2b07c18b2aa52c5148f5\"" Sep 10 23:23:34.093311 containerd[1527]: time="2025-09-10T23:23:34.093276011Z" level=info msg="connecting to shim 434ee5968b6831fdafabcc8dd8ab586228853070dc2f2b07c18b2aa52c5148f5" address="unix:///run/containerd/s/99d5183f548c0e8ee999f2b314debb57aebf22fcfb121f3f12022dfc052313de" protocol=ttrpc version=3 Sep 10 23:23:34.119330 systemd[1]: Started cri-containerd-434ee5968b6831fdafabcc8dd8ab586228853070dc2f2b07c18b2aa52c5148f5.scope - libcontainer container 434ee5968b6831fdafabcc8dd8ab586228853070dc2f2b07c18b2aa52c5148f5. 
Sep 10 23:23:34.159067 containerd[1527]: time="2025-09-10T23:23:34.159029447Z" level=info msg="StartContainer for \"434ee5968b6831fdafabcc8dd8ab586228853070dc2f2b07c18b2aa52c5148f5\" returns successfully" Sep 10 23:23:34.310616 kubelet[2678]: I0910 23:23:34.310378 2678 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-5bbr5" podStartSLOduration=22.947499311 podStartE2EDuration="29.310356739s" podCreationTimestamp="2025-09-10 23:23:05 +0000 UTC" firstStartedPulling="2025-09-10 23:23:27.707056877 +0000 UTC m=+42.787544573" lastFinishedPulling="2025-09-10 23:23:34.069914305 +0000 UTC m=+49.150402001" observedRunningTime="2025-09-10 23:23:34.309870627 +0000 UTC m=+49.390358363" watchObservedRunningTime="2025-09-10 23:23:34.310356739 +0000 UTC m=+49.390844435" Sep 10 23:23:35.088351 kubelet[2678]: I0910 23:23:35.088280 2678 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 10 23:23:35.099567 kubelet[2678]: I0910 23:23:35.099526 2678 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 10 23:23:37.325990 systemd[1]: Started sshd@9-10.0.0.24:22-10.0.0.1:33518.service - OpenSSH per-connection server daemon (10.0.0.1:33518). Sep 10 23:23:37.397449 sshd[5376]: Accepted publickey for core from 10.0.0.1 port 33518 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:23:37.399969 sshd-session[5376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:23:37.404231 systemd-logind[1498]: New session 10 of user core. Sep 10 23:23:37.415332 systemd[1]: Started session-10.scope - Session 10 of User core. 
Sep 10 23:23:37.604047 sshd[5385]: Connection closed by 10.0.0.1 port 33518 Sep 10 23:23:37.605721 sshd-session[5376]: pam_unix(sshd:session): session closed for user core Sep 10 23:23:37.613847 systemd[1]: sshd@9-10.0.0.24:22-10.0.0.1:33518.service: Deactivated successfully. Sep 10 23:23:37.617047 systemd[1]: session-10.scope: Deactivated successfully. Sep 10 23:23:37.618259 systemd-logind[1498]: Session 10 logged out. Waiting for processes to exit. Sep 10 23:23:37.621674 systemd[1]: Started sshd@10-10.0.0.24:22-10.0.0.1:33520.service - OpenSSH per-connection server daemon (10.0.0.1:33520). Sep 10 23:23:37.623355 systemd-logind[1498]: Removed session 10. Sep 10 23:23:37.686024 sshd[5405]: Accepted publickey for core from 10.0.0.1 port 33520 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:23:37.688220 sshd-session[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:23:37.695206 systemd-logind[1498]: New session 11 of user core. Sep 10 23:23:37.702305 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 23:23:37.948466 sshd[5409]: Connection closed by 10.0.0.1 port 33520 Sep 10 23:23:37.949528 sshd-session[5405]: pam_unix(sshd:session): session closed for user core Sep 10 23:23:37.961709 systemd[1]: sshd@10-10.0.0.24:22-10.0.0.1:33520.service: Deactivated successfully. Sep 10 23:23:37.964187 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 23:23:37.966994 systemd-logind[1498]: Session 11 logged out. Waiting for processes to exit. Sep 10 23:23:37.971450 systemd[1]: Started sshd@11-10.0.0.24:22-10.0.0.1:33528.service - OpenSSH per-connection server daemon (10.0.0.1:33528). Sep 10 23:23:37.976566 systemd-logind[1498]: Removed session 11. 
Sep 10 23:23:38.034609 sshd[5421]: Accepted publickey for core from 10.0.0.1 port 33528 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk Sep 10 23:23:38.035915 sshd-session[5421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 23:23:38.040744 systemd-logind[1498]: New session 12 of user core. Sep 10 23:23:38.048323 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 10 23:23:38.241176 sshd[5424]: Connection closed by 10.0.0.1 port 33528 Sep 10 23:23:38.241647 sshd-session[5421]: pam_unix(sshd:session): session closed for user core Sep 10 23:23:38.247730 systemd-logind[1498]: Session 12 logged out. Waiting for processes to exit. Sep 10 23:23:38.248540 systemd[1]: sshd@11-10.0.0.24:22-10.0.0.1:33528.service: Deactivated successfully. Sep 10 23:23:38.252995 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 23:23:38.258440 systemd-logind[1498]: Removed session 12. Sep 10 23:23:40.159448 kubelet[2678]: I0910 23:23:40.159353 2678 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 10 23:23:40.245488 containerd[1527]: time="2025-09-10T23:23:40.245257239Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b195044e35d154bb53328cc94dd1564f28dadf780e919024c4b984e3baaf7684\" id:\"79a572b332518cda3c61b60c5e080e4953ec73f36ca754b05f5a44ef8b3daf1c\" pid:5448 exit_status:1 exited_at:{seconds:1757546620 nanos:244653197}" Sep 10 23:23:40.316618 containerd[1527]: time="2025-09-10T23:23:40.315317148Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b195044e35d154bb53328cc94dd1564f28dadf780e919024c4b984e3baaf7684\" id:\"a5d96e1c3951fe201f75f4834aee05bb32b9bf0bf95a9ba75af922f008635a46\" pid:5472 exit_status:1 exited_at:{seconds:1757546620 nanos:315053832}" Sep 10 23:23:43.252979 systemd[1]: Started sshd@12-10.0.0.24:22-10.0.0.1:33312.service - OpenSSH per-connection server daemon (10.0.0.1:33312). 
Sep 10 23:23:43.319837 sshd[5486]: Accepted publickey for core from 10.0.0.1 port 33312 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:43.323038 sshd-session[5486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:43.330208 systemd-logind[1498]: New session 13 of user core.
Sep 10 23:23:43.339356 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 10 23:23:43.514031 sshd[5489]: Connection closed by 10.0.0.1 port 33312
Sep 10 23:23:43.514876 sshd-session[5486]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:43.528744 systemd[1]: sshd@12-10.0.0.24:22-10.0.0.1:33312.service: Deactivated successfully.
Sep 10 23:23:43.530617 systemd[1]: session-13.scope: Deactivated successfully.
Sep 10 23:23:43.531347 systemd-logind[1498]: Session 13 logged out. Waiting for processes to exit.
Sep 10 23:23:43.533884 systemd[1]: Started sshd@13-10.0.0.24:22-10.0.0.1:33318.service - OpenSSH per-connection server daemon (10.0.0.1:33318).
Sep 10 23:23:43.535761 systemd-logind[1498]: Removed session 13.
Sep 10 23:23:43.584857 sshd[5502]: Accepted publickey for core from 10.0.0.1 port 33318 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:43.586374 sshd-session[5502]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:43.591587 systemd-logind[1498]: New session 14 of user core.
Sep 10 23:23:43.600364 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 10 23:23:43.834677 sshd[5505]: Connection closed by 10.0.0.1 port 33318
Sep 10 23:23:43.835769 sshd-session[5502]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:43.846111 systemd[1]: sshd@13-10.0.0.24:22-10.0.0.1:33318.service: Deactivated successfully.
Sep 10 23:23:43.847957 systemd[1]: session-14.scope: Deactivated successfully.
Sep 10 23:23:43.848709 systemd-logind[1498]: Session 14 logged out. Waiting for processes to exit.
Sep 10 23:23:43.851524 systemd[1]: Started sshd@14-10.0.0.24:22-10.0.0.1:33334.service - OpenSSH per-connection server daemon (10.0.0.1:33334).
Sep 10 23:23:43.852070 systemd-logind[1498]: Removed session 14.
Sep 10 23:23:43.929620 sshd[5517]: Accepted publickey for core from 10.0.0.1 port 33334 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:43.931267 sshd-session[5517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:43.936649 systemd-logind[1498]: New session 15 of user core.
Sep 10 23:23:43.948356 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 10 23:23:44.626031 sshd[5520]: Connection closed by 10.0.0.1 port 33334
Sep 10 23:23:44.626515 sshd-session[5517]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:44.643556 systemd[1]: sshd@14-10.0.0.24:22-10.0.0.1:33334.service: Deactivated successfully.
Sep 10 23:23:44.645457 systemd[1]: session-15.scope: Deactivated successfully.
Sep 10 23:23:44.654503 systemd-logind[1498]: Session 15 logged out. Waiting for processes to exit.
Sep 10 23:23:44.658348 systemd-logind[1498]: Removed session 15.
Sep 10 23:23:44.663755 systemd[1]: Started sshd@15-10.0.0.24:22-10.0.0.1:33338.service - OpenSSH per-connection server daemon (10.0.0.1:33338).
Sep 10 23:23:44.723223 sshd[5539]: Accepted publickey for core from 10.0.0.1 port 33338 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:44.725001 sshd-session[5539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:44.730711 systemd-logind[1498]: New session 16 of user core.
Sep 10 23:23:44.736112 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 10 23:23:45.166538 sshd[5544]: Connection closed by 10.0.0.1 port 33338
Sep 10 23:23:45.168261 sshd-session[5539]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:45.176948 systemd[1]: sshd@15-10.0.0.24:22-10.0.0.1:33338.service: Deactivated successfully.
Sep 10 23:23:45.180229 systemd[1]: session-16.scope: Deactivated successfully.
Sep 10 23:23:45.182220 systemd-logind[1498]: Session 16 logged out. Waiting for processes to exit.
Sep 10 23:23:45.185267 systemd[1]: Started sshd@16-10.0.0.24:22-10.0.0.1:33354.service - OpenSSH per-connection server daemon (10.0.0.1:33354).
Sep 10 23:23:45.187344 systemd-logind[1498]: Removed session 16.
Sep 10 23:23:45.247240 sshd[5558]: Accepted publickey for core from 10.0.0.1 port 33354 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:45.248591 sshd-session[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:45.254895 systemd-logind[1498]: New session 17 of user core.
Sep 10 23:23:45.265354 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 10 23:23:45.439575 sshd[5561]: Connection closed by 10.0.0.1 port 33354
Sep 10 23:23:45.439855 sshd-session[5558]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:45.445321 systemd[1]: sshd@16-10.0.0.24:22-10.0.0.1:33354.service: Deactivated successfully.
Sep 10 23:23:45.448451 systemd[1]: session-17.scope: Deactivated successfully.
Sep 10 23:23:45.450031 systemd-logind[1498]: Session 17 logged out. Waiting for processes to exit.
Sep 10 23:23:45.451446 systemd-logind[1498]: Removed session 17.
Sep 10 23:23:50.455855 systemd[1]: Started sshd@17-10.0.0.24:22-10.0.0.1:37316.service - OpenSSH per-connection server daemon (10.0.0.1:37316).
Sep 10 23:23:50.526176 sshd[5580]: Accepted publickey for core from 10.0.0.1 port 37316 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:50.529053 sshd-session[5580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:50.535975 systemd-logind[1498]: New session 18 of user core.
Sep 10 23:23:50.543317 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 10 23:23:50.722125 sshd[5583]: Connection closed by 10.0.0.1 port 37316
Sep 10 23:23:50.722415 sshd-session[5580]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:50.727809 systemd[1]: sshd@17-10.0.0.24:22-10.0.0.1:37316.service: Deactivated successfully.
Sep 10 23:23:50.729963 systemd[1]: session-18.scope: Deactivated successfully.
Sep 10 23:23:50.731231 systemd-logind[1498]: Session 18 logged out. Waiting for processes to exit.
Sep 10 23:23:50.733957 systemd-logind[1498]: Removed session 18.
Sep 10 23:23:51.181538 containerd[1527]: time="2025-09-10T23:23:51.181497684Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4875ef8629b196357d714d3d9ca5c8533c911f4b25f0e7a660a064dfc65017de\" id:\"023ec3e7ab3910783f78e34438a76bcd374f5acd1e8456093940eff2ef417651\" pid:5606 exited_at:{seconds:1757546631 nanos:180906960}"
Sep 10 23:23:55.738846 systemd[1]: Started sshd@18-10.0.0.24:22-10.0.0.1:37326.service - OpenSSH per-connection server daemon (10.0.0.1:37326).
Sep 10 23:23:55.802314 sshd[5623]: Accepted publickey for core from 10.0.0.1 port 37326 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:23:55.803670 sshd-session[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:23:55.807744 systemd-logind[1498]: New session 19 of user core.
Sep 10 23:23:55.812305 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 10 23:23:56.012625 sshd[5626]: Connection closed by 10.0.0.1 port 37326
Sep 10 23:23:56.012922 sshd-session[5623]: pam_unix(sshd:session): session closed for user core
Sep 10 23:23:56.018206 systemd-logind[1498]: Session 19 logged out. Waiting for processes to exit.
Sep 10 23:23:56.018518 systemd[1]: sshd@18-10.0.0.24:22-10.0.0.1:37326.service: Deactivated successfully.
Sep 10 23:23:56.020463 systemd[1]: session-19.scope: Deactivated successfully.
Sep 10 23:23:56.022769 systemd-logind[1498]: Removed session 19.
Sep 10 23:23:59.296433 containerd[1527]: time="2025-09-10T23:23:59.296383371Z" level=info msg="TaskExit event in podsandbox handler container_id:\"82f27b99d4dcf57f42a2010c2b627690821d7f8d84e39573c1d6cc4bc10980ad\" id:\"d81b7ef412a428cd2638f555c955b3cc45a852eff0fa356b83ff0fed6736d5c8\" pid:5656 exited_at:{seconds:1757546639 nanos:296129831}"
Sep 10 23:24:01.023392 systemd[1]: Started sshd@19-10.0.0.24:22-10.0.0.1:34018.service - OpenSSH per-connection server daemon (10.0.0.1:34018).
Sep 10 23:24:01.068117 sshd[5668]: Accepted publickey for core from 10.0.0.1 port 34018 ssh2: RSA SHA256:01/8/GJm96qRmhpjxlCxzORm+n+531eu8FILDPAeTPk
Sep 10 23:24:01.069529 sshd-session[5668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 10 23:24:01.073323 systemd-logind[1498]: New session 20 of user core.
Sep 10 23:24:01.082315 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 10 23:24:01.206508 sshd[5671]: Connection closed by 10.0.0.1 port 34018
Sep 10 23:24:01.207022 sshd-session[5668]: pam_unix(sshd:session): session closed for user core
Sep 10 23:24:01.210534 systemd[1]: sshd@19-10.0.0.24:22-10.0.0.1:34018.service: Deactivated successfully.
Sep 10 23:24:01.212841 systemd[1]: session-20.scope: Deactivated successfully.
Sep 10 23:24:01.214592 systemd-logind[1498]: Session 20 logged out. Waiting for processes to exit.
Sep 10 23:24:01.215761 systemd-logind[1498]: Removed session 20.