Sep 5 06:03:53.756417 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 5 06:03:53.756436 kernel: Linux version 6.12.44-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Sep 5 04:25:57 -00 2025 Sep 5 06:03:53.756445 kernel: KASLR enabled Sep 5 06:03:53.756451 kernel: efi: EFI v2.7 by EDK II Sep 5 06:03:53.756456 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Sep 5 06:03:53.756461 kernel: random: crng init done Sep 5 06:03:53.756468 kernel: secureboot: Secure boot disabled Sep 5 06:03:53.756473 kernel: ACPI: Early table checksum verification disabled Sep 5 06:03:53.756479 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Sep 5 06:03:53.756486 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 5 06:03:53.756492 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:03:53.756497 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:03:53.756503 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:03:53.756521 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:03:53.756528 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:03:53.756535 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:03:53.756542 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:03:53.756547 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:03:53.756553 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 5 06:03:53.756559 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 5 06:03:53.756565 kernel: 
ACPI: Use ACPI SPCR as default console: No Sep 5 06:03:53.756571 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 5 06:03:53.756577 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Sep 5 06:03:53.756583 kernel: Zone ranges: Sep 5 06:03:53.756589 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 5 06:03:53.756596 kernel: DMA32 empty Sep 5 06:03:53.756602 kernel: Normal empty Sep 5 06:03:53.756608 kernel: Device empty Sep 5 06:03:53.756614 kernel: Movable zone start for each node Sep 5 06:03:53.756619 kernel: Early memory node ranges Sep 5 06:03:53.756625 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Sep 5 06:03:53.756631 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Sep 5 06:03:53.756637 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Sep 5 06:03:53.756643 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Sep 5 06:03:53.756649 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Sep 5 06:03:53.756655 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Sep 5 06:03:53.756660 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Sep 5 06:03:53.756668 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Sep 5 06:03:53.756674 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Sep 5 06:03:53.756680 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 5 06:03:53.756688 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 5 06:03:53.756694 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 5 06:03:53.756701 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 5 06:03:53.756709 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 5 06:03:53.756715 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 5 06:03:53.756732 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Sep 5 06:03:53.756739 kernel: psci: probing for conduit method from ACPI. 
Sep 5 06:03:53.756753 kernel: psci: PSCIv1.1 detected in firmware. Sep 5 06:03:53.756759 kernel: psci: Using standard PSCI v0.2 function IDs Sep 5 06:03:53.756765 kernel: psci: Trusted OS migration not required Sep 5 06:03:53.756772 kernel: psci: SMC Calling Convention v1.1 Sep 5 06:03:53.756778 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 5 06:03:53.756785 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 5 06:03:53.756793 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 5 06:03:53.756800 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 5 06:03:53.756806 kernel: Detected PIPT I-cache on CPU0 Sep 5 06:03:53.756812 kernel: CPU features: detected: GIC system register CPU interface Sep 5 06:03:53.756819 kernel: CPU features: detected: Spectre-v4 Sep 5 06:03:53.756825 kernel: CPU features: detected: Spectre-BHB Sep 5 06:03:53.756832 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 5 06:03:53.756838 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 5 06:03:53.756844 kernel: CPU features: detected: ARM erratum 1418040 Sep 5 06:03:53.756850 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 5 06:03:53.756857 kernel: alternatives: applying boot alternatives Sep 5 06:03:53.756864 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ad0560d5d82b42c8405832aa39f4f52a20b919c503afe4e7ecc72adb2e451fae Sep 5 06:03:53.756872 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 5 06:03:53.756879 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 5 06:03:53.756889 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 5 06:03:53.756899 kernel: Fallback order for Node 0: 0 Sep 5 06:03:53.756907 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Sep 5 06:03:53.756914 kernel: Policy zone: DMA Sep 5 06:03:53.756920 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 5 06:03:53.756929 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Sep 5 06:03:53.756938 kernel: software IO TLB: area num 4. Sep 5 06:03:53.756945 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Sep 5 06:03:53.756952 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Sep 5 06:03:53.756962 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 5 06:03:53.756968 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 5 06:03:53.756975 kernel: rcu: RCU event tracing is enabled. Sep 5 06:03:53.756982 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 5 06:03:53.756989 kernel: Trampoline variant of Tasks RCU enabled. Sep 5 06:03:53.756995 kernel: Tracing variant of Tasks RCU enabled. Sep 5 06:03:53.757002 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Sep 5 06:03:53.757008 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 5 06:03:53.757015 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 5 06:03:53.757021 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Sep 5 06:03:53.757027 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 5 06:03:53.757035 kernel: GICv3: 256 SPIs implemented Sep 5 06:03:53.757041 kernel: GICv3: 0 Extended SPIs implemented Sep 5 06:03:53.757047 kernel: Root IRQ handler: gic_handle_irq Sep 5 06:03:53.757054 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 5 06:03:53.757060 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 5 06:03:53.757066 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 5 06:03:53.757072 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 5 06:03:53.757079 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Sep 5 06:03:53.757085 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Sep 5 06:03:53.757092 kernel: GICv3: using LPI property table @0x0000000040130000 Sep 5 06:03:53.757098 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Sep 5 06:03:53.757104 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 5 06:03:53.757112 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 06:03:53.757118 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 5 06:03:53.757125 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 5 06:03:53.757131 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 5 06:03:53.757138 kernel: arm-pv: using stolen time PV Sep 5 06:03:53.757144 kernel: Console: colour dummy device 80x25 Sep 5 06:03:53.757151 kernel: ACPI: Core revision 20240827 Sep 5 06:03:53.757158 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Sep 5 06:03:53.757164 kernel: pid_max: default: 32768 minimum: 301 Sep 5 06:03:53.757171 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 5 06:03:53.757178 kernel: landlock: Up and running. Sep 5 06:03:53.757185 kernel: SELinux: Initializing. Sep 5 06:03:53.757191 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 06:03:53.757198 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 5 06:03:53.757204 kernel: rcu: Hierarchical SRCU implementation. Sep 5 06:03:53.757211 kernel: rcu: Max phase no-delay instances is 400. Sep 5 06:03:53.757218 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 5 06:03:53.757224 kernel: Remapping and enabling EFI services. Sep 5 06:03:53.757231 kernel: smp: Bringing up secondary CPUs ... Sep 5 06:03:53.757243 kernel: Detected PIPT I-cache on CPU1 Sep 5 06:03:53.757250 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 5 06:03:53.757257 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Sep 5 06:03:53.757264 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 06:03:53.757271 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 5 06:03:53.757278 kernel: Detected PIPT I-cache on CPU2 Sep 5 06:03:53.757285 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 5 06:03:53.757292 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Sep 5 06:03:53.757300 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 06:03:53.757307 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 5 06:03:53.757314 kernel: Detected PIPT I-cache on CPU3 Sep 5 06:03:53.757321 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 5 06:03:53.757328 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Sep 5 
06:03:53.757335 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 5 06:03:53.757341 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 5 06:03:53.757348 kernel: smp: Brought up 1 node, 4 CPUs Sep 5 06:03:53.757355 kernel: SMP: Total of 4 processors activated. Sep 5 06:03:53.757363 kernel: CPU: All CPU(s) started at EL1 Sep 5 06:03:53.757370 kernel: CPU features: detected: 32-bit EL0 Support Sep 5 06:03:53.757377 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 5 06:03:53.757383 kernel: CPU features: detected: Common not Private translations Sep 5 06:03:53.757390 kernel: CPU features: detected: CRC32 instructions Sep 5 06:03:53.757397 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 5 06:03:53.757404 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 5 06:03:53.757411 kernel: CPU features: detected: LSE atomic instructions Sep 5 06:03:53.757417 kernel: CPU features: detected: Privileged Access Never Sep 5 06:03:53.757425 kernel: CPU features: detected: RAS Extension Support Sep 5 06:03:53.757432 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 5 06:03:53.757439 kernel: alternatives: applying system-wide alternatives Sep 5 06:03:53.757446 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Sep 5 06:03:53.757454 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved) Sep 5 06:03:53.757460 kernel: devtmpfs: initialized Sep 5 06:03:53.757467 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 5 06:03:53.757474 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 5 06:03:53.757481 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 5 06:03:53.757489 kernel: 0 pages in range for non-PLT usage Sep 5 06:03:53.757496 
kernel: 508576 pages in range for PLT usage Sep 5 06:03:53.757503 kernel: pinctrl core: initialized pinctrl subsystem Sep 5 06:03:53.757509 kernel: SMBIOS 3.0.0 present. Sep 5 06:03:53.757516 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 5 06:03:53.757523 kernel: DMI: Memory slots populated: 1/1 Sep 5 06:03:53.757530 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 5 06:03:53.757537 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 5 06:03:53.757544 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 5 06:03:53.757552 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 5 06:03:53.757559 kernel: audit: initializing netlink subsys (disabled) Sep 5 06:03:53.757566 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1 Sep 5 06:03:53.757573 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 5 06:03:53.757580 kernel: cpuidle: using governor menu Sep 5 06:03:53.757586 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Sep 5 06:03:53.757593 kernel: ASID allocator initialised with 32768 entries Sep 5 06:03:53.757600 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 5 06:03:53.757607 kernel: Serial: AMBA PL011 UART driver Sep 5 06:03:53.757615 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 5 06:03:53.757622 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 5 06:03:53.757629 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 5 06:03:53.757636 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 5 06:03:53.757643 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 5 06:03:53.757650 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 5 06:03:53.757656 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 5 06:03:53.757663 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 5 06:03:53.757670 kernel: ACPI: Added _OSI(Module Device) Sep 5 06:03:53.757678 kernel: ACPI: Added _OSI(Processor Device) Sep 5 06:03:53.757686 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 5 06:03:53.757692 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 5 06:03:53.757699 kernel: ACPI: Interpreter enabled Sep 5 06:03:53.757706 kernel: ACPI: Using GIC for interrupt routing Sep 5 06:03:53.757713 kernel: ACPI: MCFG table detected, 1 entries Sep 5 06:03:53.757719 kernel: ACPI: CPU0 has been hot-added Sep 5 06:03:53.757798 kernel: ACPI: CPU1 has been hot-added Sep 5 06:03:53.757805 kernel: ACPI: CPU2 has been hot-added Sep 5 06:03:53.757812 kernel: ACPI: CPU3 has been hot-added Sep 5 06:03:53.757821 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 5 06:03:53.757828 kernel: printk: legacy console [ttyAMA0] enabled Sep 5 06:03:53.757835 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 5 06:03:53.757958 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig 
ASPM ClockPM Segments MSI HPX-Type3] Sep 5 06:03:53.758022 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 5 06:03:53.758079 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 5 06:03:53.758135 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 5 06:03:53.758192 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 5 06:03:53.758201 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 5 06:03:53.758208 kernel: PCI host bridge to bus 0000:00 Sep 5 06:03:53.758269 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 5 06:03:53.758321 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 5 06:03:53.758372 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 5 06:03:53.758422 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 5 06:03:53.758506 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Sep 5 06:03:53.758576 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 5 06:03:53.758636 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Sep 5 06:03:53.758694 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Sep 5 06:03:53.758776 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Sep 5 06:03:53.758836 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Sep 5 06:03:53.758894 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Sep 5 06:03:53.758954 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Sep 5 06:03:53.759007 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 5 06:03:53.759058 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 5 06:03:53.759109 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] 
Sep 5 06:03:53.759118 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 5 06:03:53.759125 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 5 06:03:53.759133 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 5 06:03:53.759141 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 5 06:03:53.759148 kernel: iommu: Default domain type: Translated Sep 5 06:03:53.759155 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 5 06:03:53.759161 kernel: efivars: Registered efivars operations Sep 5 06:03:53.759168 kernel: vgaarb: loaded Sep 5 06:03:53.759175 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 5 06:03:53.759182 kernel: VFS: Disk quotas dquot_6.6.0 Sep 5 06:03:53.759189 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 5 06:03:53.759196 kernel: pnp: PnP ACPI init Sep 5 06:03:53.759266 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 5 06:03:53.759276 kernel: pnp: PnP ACPI: found 1 devices Sep 5 06:03:53.759283 kernel: NET: Registered PF_INET protocol family Sep 5 06:03:53.759290 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 5 06:03:53.759297 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 5 06:03:53.759304 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 5 06:03:53.759317 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 5 06:03:53.759324 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 5 06:03:53.759333 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 5 06:03:53.759340 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 06:03:53.759347 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 5 06:03:53.759353 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 5 
06:03:53.759360 kernel: PCI: CLS 0 bytes, default 64 Sep 5 06:03:53.759367 kernel: kvm [1]: HYP mode not available Sep 5 06:03:53.759374 kernel: Initialise system trusted keyrings Sep 5 06:03:53.759381 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 5 06:03:53.759388 kernel: Key type asymmetric registered Sep 5 06:03:53.759396 kernel: Asymmetric key parser 'x509' registered Sep 5 06:03:53.759403 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 5 06:03:53.759410 kernel: io scheduler mq-deadline registered Sep 5 06:03:53.759417 kernel: io scheduler kyber registered Sep 5 06:03:53.759424 kernel: io scheduler bfq registered Sep 5 06:03:53.759431 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 5 06:03:53.759438 kernel: ACPI: button: Power Button [PWRB] Sep 5 06:03:53.759445 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 5 06:03:53.759507 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 5 06:03:53.759517 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 5 06:03:53.759524 kernel: thunder_xcv, ver 1.0 Sep 5 06:03:53.759531 kernel: thunder_bgx, ver 1.0 Sep 5 06:03:53.759538 kernel: nicpf, ver 1.0 Sep 5 06:03:53.759545 kernel: nicvf, ver 1.0 Sep 5 06:03:53.759611 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 5 06:03:53.759666 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-05T06:03:53 UTC (1757052233) Sep 5 06:03:53.759675 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 5 06:03:53.759683 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 5 06:03:53.759690 kernel: watchdog: NMI not fully supported Sep 5 06:03:53.759697 kernel: watchdog: Hard watchdog permanently disabled Sep 5 06:03:53.759704 kernel: NET: Registered PF_INET6 protocol family Sep 5 06:03:53.759711 kernel: Segment Routing with IPv6 Sep 5 06:03:53.759718 kernel: In-situ OAM (IOAM) with IPv6 Sep 5 
06:03:53.759732 kernel: NET: Registered PF_PACKET protocol family Sep 5 06:03:53.759739 kernel: Key type dns_resolver registered Sep 5 06:03:53.759752 kernel: registered taskstats version 1 Sep 5 06:03:53.759759 kernel: Loading compiled-in X.509 certificates Sep 5 06:03:53.759769 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.44-flatcar: aa317a1d4cc75a128b85a6fc319190bc5853ac85' Sep 5 06:03:53.759776 kernel: Demotion targets for Node 0: null Sep 5 06:03:53.759783 kernel: Key type .fscrypt registered Sep 5 06:03:53.759790 kernel: Key type fscrypt-provisioning registered Sep 5 06:03:53.759797 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 5 06:03:53.759803 kernel: ima: Allocated hash algorithm: sha1 Sep 5 06:03:53.759810 kernel: ima: No architecture policies found Sep 5 06:03:53.759817 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 5 06:03:53.759825 kernel: clk: Disabling unused clocks Sep 5 06:03:53.759832 kernel: PM: genpd: Disabling unused power domains Sep 5 06:03:53.759839 kernel: Warning: unable to open an initial console. Sep 5 06:03:53.759846 kernel: Freeing unused kernel memory: 38912K Sep 5 06:03:53.759853 kernel: Run /init as init process Sep 5 06:03:53.759860 kernel: with arguments: Sep 5 06:03:53.759866 kernel: /init Sep 5 06:03:53.759873 kernel: with environment: Sep 5 06:03:53.759880 kernel: HOME=/ Sep 5 06:03:53.759888 kernel: TERM=linux Sep 5 06:03:53.759895 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 5 06:03:53.759903 systemd[1]: Successfully made /usr/ read-only. Sep 5 06:03:53.759913 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 5 06:03:53.759921 systemd[1]: Detected virtualization kvm. 
Sep 5 06:03:53.759928 systemd[1]: Detected architecture arm64. Sep 5 06:03:53.759935 systemd[1]: Running in initrd. Sep 5 06:03:53.759942 systemd[1]: No hostname configured, using default hostname. Sep 5 06:03:53.759951 systemd[1]: Hostname set to . Sep 5 06:03:53.759958 systemd[1]: Initializing machine ID from VM UUID. Sep 5 06:03:53.759966 systemd[1]: Queued start job for default target initrd.target. Sep 5 06:03:53.759973 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 06:03:53.759980 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 06:03:53.759988 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 5 06:03:53.759996 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 06:03:53.760003 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 5 06:03:53.760012 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 5 06:03:53.760021 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 5 06:03:53.760029 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 5 06:03:53.760036 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 06:03:53.760043 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 06:03:53.760051 systemd[1]: Reached target paths.target - Path Units. Sep 5 06:03:53.760058 systemd[1]: Reached target slices.target - Slice Units. Sep 5 06:03:53.760066 systemd[1]: Reached target swap.target - Swaps. Sep 5 06:03:53.760074 systemd[1]: Reached target timers.target - Timer Units. 
Sep 5 06:03:53.760081 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 5 06:03:53.760089 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 5 06:03:53.760096 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 5 06:03:53.760103 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 5 06:03:53.760111 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 06:03:53.760118 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 06:03:53.760127 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 06:03:53.760134 systemd[1]: Reached target sockets.target - Socket Units. Sep 5 06:03:53.760142 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 5 06:03:53.760149 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 5 06:03:53.760156 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 5 06:03:53.760164 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 5 06:03:53.760171 systemd[1]: Starting systemd-fsck-usr.service... Sep 5 06:03:53.760179 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 06:03:53.760186 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 06:03:53.760194 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 5 06:03:53.760202 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 06:03:53.760210 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 5 06:03:53.760217 systemd[1]: Finished systemd-fsck-usr.service. 
Sep 5 06:03:53.760225 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 5 06:03:53.760249 systemd-journald[245]: Collecting audit messages is disabled. Sep 5 06:03:53.760267 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 5 06:03:53.760276 systemd-journald[245]: Journal started Sep 5 06:03:53.760295 systemd-journald[245]: Runtime Journal (/run/log/journal/5e9bedd217254ffb8dceb1dd46c6f49f) is 6M, max 48.5M, 42.4M free. Sep 5 06:03:53.753836 systemd-modules-load[246]: Inserted module 'overlay' Sep 5 06:03:53.763737 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 5 06:03:53.766289 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 06:03:53.768748 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 5 06:03:53.769798 systemd-modules-load[246]: Inserted module 'br_netfilter' Sep 5 06:03:53.770760 kernel: Bridge firewalling registered Sep 5 06:03:53.775887 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 5 06:03:53.776998 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 06:03:53.780851 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 06:03:53.782133 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 06:03:53.786656 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 06:03:53.795756 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 06:03:53.796771 systemd-tmpfiles[276]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 5 06:03:53.796861 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Sep 5 06:03:53.798678 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 06:03:53.800788 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 06:03:53.803694 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 5 06:03:53.805894 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 06:03:53.826355 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ad0560d5d82b42c8405832aa39f4f52a20b919c503afe4e7ecc72adb2e451fae Sep 5 06:03:53.839833 systemd-resolved[288]: Positive Trust Anchors: Sep 5 06:03:53.839851 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 5 06:03:53.839882 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 5 06:03:53.844564 systemd-resolved[288]: Defaulting to hostname 'linux'. Sep 5 06:03:53.846482 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 5 06:03:53.847419 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Sep 5 06:03:53.891772 kernel: SCSI subsystem initialized
Sep 5 06:03:53.896757 kernel: Loading iSCSI transport class v2.0-870.
Sep 5 06:03:53.903766 kernel: iscsi: registered transport (tcp)
Sep 5 06:03:53.915820 kernel: iscsi: registered transport (qla4xxx)
Sep 5 06:03:53.915846 kernel: QLogic iSCSI HBA Driver
Sep 5 06:03:53.931674 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 5 06:03:53.951799 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 5 06:03:53.953002 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 5 06:03:53.997117 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 5 06:03:53.998997 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 5 06:03:54.054757 kernel: raid6: neonx8 gen() 15723 MB/s
Sep 5 06:03:54.071764 kernel: raid6: neonx4 gen() 15736 MB/s
Sep 5 06:03:54.088751 kernel: raid6: neonx2 gen() 13164 MB/s
Sep 5 06:03:54.105750 kernel: raid6: neonx1 gen() 10416 MB/s
Sep 5 06:03:54.122746 kernel: raid6: int64x8 gen() 6877 MB/s
Sep 5 06:03:54.139747 kernel: raid6: int64x4 gen() 7311 MB/s
Sep 5 06:03:54.156757 kernel: raid6: int64x2 gen() 6084 MB/s
Sep 5 06:03:54.173748 kernel: raid6: int64x1 gen() 5033 MB/s
Sep 5 06:03:54.173769 kernel: raid6: using algorithm neonx4 gen() 15736 MB/s
Sep 5 06:03:54.190759 kernel: raid6: .... xor() 12246 MB/s, rmw enabled
Sep 5 06:03:54.190792 kernel: raid6: using neon recovery algorithm
Sep 5 06:03:54.195751 kernel: xor: measuring software checksum speed
Sep 5 06:03:54.195786 kernel: 8regs : 21613 MB/sec
Sep 5 06:03:54.197248 kernel: 32regs : 19932 MB/sec
Sep 5 06:03:54.197262 kernel: arm64_neon : 28128 MB/sec
Sep 5 06:03:54.197271 kernel: xor: using function: arm64_neon (28128 MB/sec)
Sep 5 06:03:54.248770 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 5 06:03:54.254015 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 5 06:03:54.257281 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 5 06:03:54.284539 systemd-udevd[499]: Using default interface naming scheme 'v255'.
Sep 5 06:03:54.288573 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 5 06:03:54.290241 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 5 06:03:54.315992 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Sep 5 06:03:54.336606 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 06:03:54.338563 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 5 06:03:54.396381 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 06:03:54.399201 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 5 06:03:54.446296 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 5 06:03:54.448925 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 5 06:03:54.454857 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 5 06:03:54.454887 kernel: GPT:9289727 != 19775487
Sep 5 06:03:54.454897 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 5 06:03:54.454906 kernel: GPT:9289727 != 19775487
Sep 5 06:03:54.456172 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 5 06:03:54.456210 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 06:03:54.460912 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 5 06:03:54.461906 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 06:03:54.465448 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 06:03:54.468849 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 06:03:54.488187 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 5 06:03:54.489393 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 5 06:03:54.497248 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 5 06:03:54.503754 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 5 06:03:54.504631 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 5 06:03:54.507743 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 06:03:54.524059 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 06:03:54.524995 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 06:03:54.526698 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 06:03:54.528533 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 5 06:03:54.530708 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 5 06:03:54.532133 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 5 06:03:54.551818 disk-uuid[594]: Primary Header is updated.
Sep 5 06:03:54.551818 disk-uuid[594]: Secondary Entries is updated.
Sep 5 06:03:54.551818 disk-uuid[594]: Secondary Header is updated.
Sep 5 06:03:54.554746 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 06:03:54.558220 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 06:03:55.562947 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 5 06:03:55.563418 disk-uuid[597]: The operation has completed successfully.
Sep 5 06:03:55.588372 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 5 06:03:55.588455 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 5 06:03:55.612159 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 5 06:03:55.640441 sh[614]: Success
Sep 5 06:03:55.651954 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 5 06:03:55.651986 kernel: device-mapper: uevent: version 1.0.3
Sep 5 06:03:55.652777 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 5 06:03:55.659751 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 5 06:03:55.682422 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 5 06:03:55.684702 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 5 06:03:55.695568 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 5 06:03:55.700449 kernel: BTRFS: device fsid 9394a7fb-1948-4797-93d7-fc7ecccd6bdf devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (626)
Sep 5 06:03:55.700470 kernel: BTRFS info (device dm-0): first mount of filesystem 9394a7fb-1948-4797-93d7-fc7ecccd6bdf
Sep 5 06:03:55.700480 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 5 06:03:55.704742 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 5 06:03:55.704770 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 5 06:03:55.705568 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 5 06:03:55.706571 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 5 06:03:55.707610 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 5 06:03:55.708225 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 5 06:03:55.709534 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 5 06:03:55.729746 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (659)
Sep 5 06:03:55.729878 kernel: BTRFS info (device vda6): first mount of filesystem ab813a37-c745-4f2a-8834-a13cfeeae891
Sep 5 06:03:55.731433 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 06:03:55.734032 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 06:03:55.734062 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 06:03:55.737826 kernel: BTRFS info (device vda6): last unmount of filesystem ab813a37-c745-4f2a-8834-a13cfeeae891
Sep 5 06:03:55.738413 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 5 06:03:55.739984 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 5 06:03:55.803792 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 5 06:03:55.806182 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 5 06:03:55.841718 systemd-networkd[804]: lo: Link UP
Sep 5 06:03:55.842396 systemd-networkd[804]: lo: Gained carrier
Sep 5 06:03:55.843662 systemd-networkd[804]: Enumeration completed
Sep 5 06:03:55.843568 ignition[700]: Ignition 2.22.0
Sep 5 06:03:55.843960 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 06:03:55.843575 ignition[700]: Stage: fetch-offline
Sep 5 06:03:55.844043 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 06:03:55.843604 ignition[700]: no configs at "/usr/lib/ignition/base.d"
Sep 5 06:03:55.844047 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 06:03:55.843611 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:03:55.844946 systemd-networkd[804]: eth0: Link UP
Sep 5 06:03:55.843689 ignition[700]: parsed url from cmdline: ""
Sep 5 06:03:55.845030 systemd-networkd[804]: eth0: Gained carrier
Sep 5 06:03:55.843692 ignition[700]: no config URL provided
Sep 5 06:03:55.845039 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 06:03:55.843696 ignition[700]: reading system config file "/usr/lib/ignition/user.ign"
Sep 5 06:03:55.845782 systemd[1]: Reached target network.target - Network.
Sep 5 06:03:55.843702 ignition[700]: no config at "/usr/lib/ignition/user.ign"
Sep 5 06:03:55.843719 ignition[700]: op(1): [started] loading QEMU firmware config module
Sep 5 06:03:55.843746 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 5 06:03:55.854707 ignition[700]: op(1): [finished] loading QEMU firmware config module
Sep 5 06:03:55.863780 systemd-networkd[804]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 06:03:55.899152 ignition[700]: parsing config with SHA512: 51c2fd7738cfd5fccf6840235bd4373433b45db1b9598a24821b97b419ca3945ebc66ec86fd2c001d44ffd3b34bf208d19bc859db513b89782f287921d9eeb88
Sep 5 06:03:55.904312 unknown[700]: fetched base config from "system"
Sep 5 06:03:55.904325 unknown[700]: fetched user config from "qemu"
Sep 5 06:03:55.904673 ignition[700]: fetch-offline: fetch-offline passed
Sep 5 06:03:55.906557 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 06:03:55.904745 ignition[700]: Ignition finished successfully
Sep 5 06:03:55.907957 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 5 06:03:55.908597 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 5 06:03:55.952756 ignition[812]: Ignition 2.22.0
Sep 5 06:03:55.952771 ignition[812]: Stage: kargs
Sep 5 06:03:55.952905 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Sep 5 06:03:55.952915 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:03:55.953598 ignition[812]: kargs: kargs passed
Sep 5 06:03:55.953635 ignition[812]: Ignition finished successfully
Sep 5 06:03:55.957276 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 5 06:03:55.959405 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 5 06:03:55.995042 ignition[819]: Ignition 2.22.0
Sep 5 06:03:55.995057 ignition[819]: Stage: disks
Sep 5 06:03:55.995182 ignition[819]: no configs at "/usr/lib/ignition/base.d"
Sep 5 06:03:55.995191 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:03:55.997709 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 5 06:03:55.995895 ignition[819]: disks: disks passed
Sep 5 06:03:55.999177 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 5 06:03:55.995940 ignition[819]: Ignition finished successfully
Sep 5 06:03:56.000469 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 5 06:03:56.001651 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 5 06:03:56.003152 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 06:03:56.004352 systemd[1]: Reached target basic.target - Basic System.
Sep 5 06:03:56.006660 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 5 06:03:56.034845 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 5 06:03:56.038914 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 5 06:03:56.040621 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 5 06:03:56.094617 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 5 06:03:56.095843 kernel: EXT4-fs (vda9): mounted filesystem f4f5d9cb-0abd-4bb7-89fa-b5d1beb281ac r/w with ordered data mode. Quota mode: none.
Sep 5 06:03:56.095668 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 5 06:03:56.098162 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 06:03:56.099943 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 5 06:03:56.100701 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 5 06:03:56.100780 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 5 06:03:56.100804 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 06:03:56.107859 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 5 06:03:56.110865 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 5 06:03:56.114622 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (837)
Sep 5 06:03:56.114648 kernel: BTRFS info (device vda6): first mount of filesystem ab813a37-c745-4f2a-8834-a13cfeeae891
Sep 5 06:03:56.114659 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 06:03:56.117743 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 06:03:56.117774 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 06:03:56.118815 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 06:03:56.142524 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory
Sep 5 06:03:56.146336 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory
Sep 5 06:03:56.149763 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory
Sep 5 06:03:56.153341 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 5 06:03:56.212980 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 5 06:03:56.215898 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 5 06:03:56.217149 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 5 06:03:56.230612 kernel: BTRFS info (device vda6): last unmount of filesystem ab813a37-c745-4f2a-8834-a13cfeeae891
Sep 5 06:03:56.241964 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 5 06:03:56.256076 ignition[951]: INFO : Ignition 2.22.0
Sep 5 06:03:56.256076 ignition[951]: INFO : Stage: mount
Sep 5 06:03:56.257305 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 06:03:56.257305 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:03:56.257305 ignition[951]: INFO : mount: mount passed
Sep 5 06:03:56.257305 ignition[951]: INFO : Ignition finished successfully
Sep 5 06:03:56.258983 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 5 06:03:56.261838 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 5 06:03:56.821826 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 5 06:03:56.823298 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 5 06:03:56.847750 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (964)
Sep 5 06:03:56.847805 kernel: BTRFS info (device vda6): first mount of filesystem ab813a37-c745-4f2a-8834-a13cfeeae891
Sep 5 06:03:56.849336 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 5 06:03:56.851743 kernel: BTRFS info (device vda6): turning on async discard
Sep 5 06:03:56.851764 kernel: BTRFS info (device vda6): enabling free space tree
Sep 5 06:03:56.852671 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 5 06:03:56.886543 ignition[981]: INFO : Ignition 2.22.0
Sep 5 06:03:56.886543 ignition[981]: INFO : Stage: files
Sep 5 06:03:56.887812 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 06:03:56.887812 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:03:56.887812 ignition[981]: DEBUG : files: compiled without relabeling support, skipping
Sep 5 06:03:56.890485 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 5 06:03:56.890485 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 5 06:03:56.892894 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 5 06:03:56.893908 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 5 06:03:56.893908 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 5 06:03:56.893363 unknown[981]: wrote ssh authorized keys file for user: core
Sep 5 06:03:56.896989 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 5 06:03:56.896989 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 5 06:03:56.934522 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 5 06:03:57.455655 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 5 06:03:57.457264 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 5 06:03:57.457264 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 5 06:03:57.457264 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 06:03:57.457264 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 5 06:03:57.457264 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 06:03:57.457264 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 5 06:03:57.457264 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 06:03:57.457264 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 5 06:03:57.468329 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 06:03:57.468329 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 5 06:03:57.468329 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 06:03:57.468329 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 06:03:57.468329 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 06:03:57.468329 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 5 06:03:57.755837 systemd-networkd[804]: eth0: Gained IPv6LL
Sep 5 06:03:57.773079 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 5 06:03:58.042526 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 5 06:03:58.042526 ignition[981]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 5 06:03:58.045816 ignition[981]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 06:03:58.048765 ignition[981]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 5 06:03:58.048765 ignition[981]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 5 06:03:58.048765 ignition[981]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Sep 5 06:03:58.048765 ignition[981]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 06:03:58.048765 ignition[981]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 5 06:03:58.048765 ignition[981]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Sep 5 06:03:58.048765 ignition[981]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Sep 5 06:03:58.068704 ignition[981]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 06:03:58.071469 ignition[981]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 5 06:03:58.072668 ignition[981]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 5 06:03:58.072668 ignition[981]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Sep 5 06:03:58.072668 ignition[981]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Sep 5 06:03:58.072668 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 06:03:58.072668 ignition[981]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 5 06:03:58.072668 ignition[981]: INFO : files: files passed
Sep 5 06:03:58.072668 ignition[981]: INFO : Ignition finished successfully
Sep 5 06:03:58.073720 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 5 06:03:58.077842 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 5 06:03:58.080952 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 5 06:03:58.099483 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 5 06:03:58.099569 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 5 06:03:58.102251 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 5 06:03:58.103522 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 06:03:58.103522 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 06:03:58.105854 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 5 06:03:58.105350 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 06:03:58.106846 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 5 06:03:58.109851 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 5 06:03:58.143347 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 5 06:03:58.143430 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 5 06:03:58.145187 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 5 06:03:58.146503 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 5 06:03:58.147883 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 5 06:03:58.148517 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 5 06:03:58.161160 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 06:03:58.163003 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 5 06:03:58.178131 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 5 06:03:58.179077 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 5 06:03:58.180712 systemd[1]: Stopped target timers.target - Timer Units.
Sep 5 06:03:58.182268 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 5 06:03:58.182371 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 5 06:03:58.184463 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 5 06:03:58.186157 systemd[1]: Stopped target basic.target - Basic System.
Sep 5 06:03:58.187430 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 5 06:03:58.188836 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 5 06:03:58.190313 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 5 06:03:58.191789 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 5 06:03:58.193437 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 5 06:03:58.194831 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 5 06:03:58.196317 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 5 06:03:58.197794 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 5 06:03:58.199284 systemd[1]: Stopped target swap.target - Swaps.
Sep 5 06:03:58.200405 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 5 06:03:58.200503 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 5 06:03:58.202340 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 5 06:03:58.203810 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 5 06:03:58.205364 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 5 06:03:58.208787 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 5 06:03:58.209708 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 5 06:03:58.209827 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 5 06:03:58.212269 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 5 06:03:58.212371 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 5 06:03:58.213942 systemd[1]: Stopped target paths.target - Path Units.
Sep 5 06:03:58.215343 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 5 06:03:58.218825 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 5 06:03:58.219888 systemd[1]: Stopped target slices.target - Slice Units.
Sep 5 06:03:58.221467 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 5 06:03:58.222676 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 5 06:03:58.222779 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 5 06:03:58.224002 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 5 06:03:58.224071 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 5 06:03:58.225273 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 5 06:03:58.225374 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 5 06:03:58.226652 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 5 06:03:58.226768 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 5 06:03:58.228555 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 5 06:03:58.230107 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 5 06:03:58.230232 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 5 06:03:58.253039 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 5 06:03:58.253683 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 5 06:03:58.253827 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 5 06:03:58.255406 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 5 06:03:58.255514 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 5 06:03:58.260781 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 5 06:03:58.260858 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 5 06:03:58.266183 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 5 06:03:58.272186 ignition[1037]: INFO : Ignition 2.22.0
Sep 5 06:03:58.272186 ignition[1037]: INFO : Stage: umount
Sep 5 06:03:58.273556 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 5 06:03:58.273556 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 5 06:03:58.273556 ignition[1037]: INFO : umount: umount passed
Sep 5 06:03:58.273556 ignition[1037]: INFO : Ignition finished successfully
Sep 5 06:03:58.275031 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 5 06:03:58.275134 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 5 06:03:58.276115 systemd[1]: Stopped target network.target - Network.
Sep 5 06:03:58.278808 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 5 06:03:58.278874 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 5 06:03:58.279868 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 5 06:03:58.279908 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 5 06:03:58.281158 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 5 06:03:58.281199 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 5 06:03:58.282500 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 5 06:03:58.282533 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 5 06:03:58.284183 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 5 06:03:58.285462 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 5 06:03:58.291080 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 5 06:03:58.291189 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 5 06:03:58.294122 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 5 06:03:58.294341 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 5 06:03:58.295783 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 5 06:03:58.298139 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 5 06:03:58.298639 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 5 06:03:58.299851 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 5 06:03:58.299884 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 5 06:03:58.302067 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 5 06:03:58.303913 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 5 06:03:58.303969 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 5 06:03:58.305497 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 5 06:03:58.305537 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 5 06:03:58.307815 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 5 06:03:58.307866 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 5 06:03:58.309449 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 5 06:03:58.309491 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 06:03:58.311955 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 06:03:58.316046 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 5 06:03:58.316106 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 5 06:03:58.316380 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 5 06:03:58.316465 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 5 06:03:58.318918 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Sep 5 06:03:58.318999 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 5 06:03:58.329263 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 5 06:03:58.330902 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 06:03:58.332139 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 5 06:03:58.332171 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 5 06:03:58.333576 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 5 06:03:58.333604 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 06:03:58.334972 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 5 06:03:58.335008 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 5 06:03:58.337222 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 5 06:03:58.337264 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 5 06:03:58.339239 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 5 06:03:58.339280 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 5 06:03:58.342098 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 5 06:03:58.343803 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 5 06:03:58.343862 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 06:03:58.346427 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 5 06:03:58.346465 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 06:03:58.349061 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 5 06:03:58.349101 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 5 06:03:58.352421 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 5 06:03:58.352467 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 5 06:03:58.352499 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 5 06:03:58.352703 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 5 06:03:58.359818 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 5 06:03:58.364220 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 5 06:03:58.364316 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 5 06:03:58.366038 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 5 06:03:58.368076 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 5 06:03:58.398807 systemd[1]: Switching root. Sep 5 06:03:58.427584 systemd-journald[245]: Journal stopped Sep 5 06:03:59.147870 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Sep 5 06:03:59.147926 kernel: SELinux: policy capability network_peer_controls=1 Sep 5 06:03:59.147943 kernel: SELinux: policy capability open_perms=1 Sep 5 06:03:59.147953 kernel: SELinux: policy capability extended_socket_class=1 Sep 5 06:03:59.147962 kernel: SELinux: policy capability always_check_network=0 Sep 5 06:03:59.147971 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 5 06:03:59.147980 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 5 06:03:59.147992 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 5 06:03:59.148001 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 5 06:03:59.148009 kernel: SELinux: policy capability userspace_initial_context=0 Sep 5 06:03:59.148022 systemd[1]: Successfully loaded SELinux policy in 62.015ms. 
Sep 5 06:03:59.148039 kernel: audit: type=1403 audit(1757052238.608:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 5 06:03:59.148053 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 4.967ms. Sep 5 06:03:59.148068 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 5 06:03:59.148084 systemd[1]: Detected virtualization kvm. Sep 5 06:03:59.148094 systemd[1]: Detected architecture arm64. Sep 5 06:03:59.148104 systemd[1]: Detected first boot. Sep 5 06:03:59.148115 systemd[1]: Initializing machine ID from VM UUID. Sep 5 06:03:59.148124 zram_generator::config[1082]: No configuration found. Sep 5 06:03:59.148139 kernel: NET: Registered PF_VSOCK protocol family Sep 5 06:03:59.148148 systemd[1]: Populated /etc with preset unit settings. Sep 5 06:03:59.148159 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 5 06:03:59.148168 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 5 06:03:59.148178 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 5 06:03:59.148188 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 5 06:03:59.148198 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 5 06:03:59.148207 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 5 06:03:59.148217 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 5 06:03:59.148228 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 5 06:03:59.148238 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. 
Sep 5 06:03:59.148248 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 5 06:03:59.148258 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 5 06:03:59.148269 systemd[1]: Created slice user.slice - User and Session Slice. Sep 5 06:03:59.148279 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 5 06:03:59.148289 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 5 06:03:59.148298 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 5 06:03:59.148316 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 5 06:03:59.148327 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 5 06:03:59.148337 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 5 06:03:59.148348 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 5 06:03:59.148357 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 5 06:03:59.148367 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 5 06:03:59.148378 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 5 06:03:59.148388 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 5 06:03:59.148399 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 5 06:03:59.148409 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 5 06:03:59.148419 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 5 06:03:59.148429 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 5 06:03:59.148439 systemd[1]: Reached target slices.target - Slice Units. 
Sep 5 06:03:59.148448 systemd[1]: Reached target swap.target - Swaps. Sep 5 06:03:59.148460 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 5 06:03:59.148469 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 5 06:03:59.148479 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 5 06:03:59.148490 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 5 06:03:59.148500 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 5 06:03:59.148510 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 5 06:03:59.148520 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 5 06:03:59.148531 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 5 06:03:59.148541 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 5 06:03:59.148550 systemd[1]: Mounting media.mount - External Media Directory... Sep 5 06:03:59.148560 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 5 06:03:59.148570 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 5 06:03:59.148582 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 5 06:03:59.148592 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 5 06:03:59.148602 systemd[1]: Reached target machines.target - Containers. Sep 5 06:03:59.148612 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 5 06:03:59.148622 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 06:03:59.148631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
Sep 5 06:03:59.148641 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 5 06:03:59.148651 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 06:03:59.148661 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 06:03:59.148672 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 06:03:59.148681 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 5 06:03:59.148691 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 06:03:59.148705 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 5 06:03:59.148730 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 5 06:03:59.148744 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 5 06:03:59.148754 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 5 06:03:59.148763 systemd[1]: Stopped systemd-fsck-usr.service. Sep 5 06:03:59.148776 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 06:03:59.148786 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 5 06:03:59.148796 kernel: loop: module loaded Sep 5 06:03:59.148805 kernel: fuse: init (API version 7.41) Sep 5 06:03:59.148814 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 5 06:03:59.148824 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 5 06:03:59.148834 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Sep 5 06:03:59.148844 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 5 06:03:59.148854 kernel: ACPI: bus type drm_connector registered Sep 5 06:03:59.148865 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 5 06:03:59.148875 systemd[1]: verity-setup.service: Deactivated successfully. Sep 5 06:03:59.148885 systemd[1]: Stopped verity-setup.service. Sep 5 06:03:59.148895 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 5 06:03:59.148905 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 5 06:03:59.148916 systemd[1]: Mounted media.mount - External Media Directory. Sep 5 06:03:59.148926 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 5 06:03:59.148936 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 5 06:03:59.148967 systemd-journald[1150]: Collecting audit messages is disabled. Sep 5 06:03:59.148990 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 5 06:03:59.149000 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 5 06:03:59.149010 systemd-journald[1150]: Journal started Sep 5 06:03:59.149029 systemd-journald[1150]: Runtime Journal (/run/log/journal/5e9bedd217254ffb8dceb1dd46c6f49f) is 6M, max 48.5M, 42.4M free. Sep 5 06:03:58.954901 systemd[1]: Queued start job for default target multi-user.target. Sep 5 06:03:58.966702 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 5 06:03:58.967078 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 5 06:03:59.153117 systemd[1]: Started systemd-journald.service - Journal Service. Sep 5 06:03:59.153881 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 5 06:03:59.155108 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Sep 5 06:03:59.155269 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 5 06:03:59.156451 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 06:03:59.156607 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 06:03:59.157807 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 06:03:59.157972 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 5 06:03:59.158981 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 06:03:59.159129 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 06:03:59.160373 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 5 06:03:59.160538 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 5 06:03:59.161770 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 06:03:59.161937 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 06:03:59.163004 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 5 06:03:59.164258 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 5 06:03:59.165573 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 5 06:03:59.166893 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 5 06:03:59.178274 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 5 06:03:59.180242 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 5 06:03:59.181959 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 5 06:03:59.182879 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 5 06:03:59.182903 systemd[1]: Reached target local-fs.target - Local File Systems. 
Sep 5 06:03:59.184450 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 5 06:03:59.195485 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 5 06:03:59.196484 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 06:03:59.197658 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 5 06:03:59.199395 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 5 06:03:59.200432 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 5 06:03:59.201332 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 5 06:03:59.202260 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 5 06:03:59.204127 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 5 06:03:59.208285 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 5 06:03:59.212053 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 5 06:03:59.215794 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 5 06:03:59.217089 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 5 06:03:59.218745 kernel: loop0: detected capacity change from 0 to 119320 Sep 5 06:03:59.219033 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 5 06:03:59.222294 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 5 06:03:59.223838 systemd-journald[1150]: Time spent on flushing to /var/log/journal/5e9bedd217254ffb8dceb1dd46c6f49f is 16.560ms for 893 entries. 
Sep 5 06:03:59.223838 systemd-journald[1150]: System Journal (/var/log/journal/5e9bedd217254ffb8dceb1dd46c6f49f) is 8M, max 195.6M, 187.6M free. Sep 5 06:03:59.251122 systemd-journald[1150]: Received client request to flush runtime journal. Sep 5 06:03:59.251169 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 5 06:03:59.251188 kernel: loop1: detected capacity change from 0 to 211168 Sep 5 06:03:59.223628 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 5 06:03:59.232275 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 5 06:03:59.235591 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 5 06:03:59.244867 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 5 06:03:59.249029 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 5 06:03:59.254070 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 5 06:03:59.260624 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 5 06:03:59.271416 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Sep 5 06:03:59.271436 systemd-tmpfiles[1213]: ACLs are not supported, ignoring. Sep 5 06:03:59.274473 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 5 06:03:59.280506 kernel: loop2: detected capacity change from 0 to 100608 Sep 5 06:03:59.299758 kernel: loop3: detected capacity change from 0 to 119320 Sep 5 06:03:59.304754 kernel: loop4: detected capacity change from 0 to 211168 Sep 5 06:03:59.310818 kernel: loop5: detected capacity change from 0 to 100608 Sep 5 06:03:59.314144 (sd-merge)[1221]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 5 06:03:59.314486 (sd-merge)[1221]: Merged extensions into '/usr'. 
Sep 5 06:03:59.318295 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)... Sep 5 06:03:59.318313 systemd[1]: Reloading... Sep 5 06:03:59.376915 zram_generator::config[1243]: No configuration found. Sep 5 06:03:59.459980 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 5 06:03:59.523475 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 5 06:03:59.523651 systemd[1]: Reloading finished in 204 ms. Sep 5 06:03:59.552755 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 5 06:03:59.553871 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 5 06:03:59.564993 systemd[1]: Starting ensure-sysext.service... Sep 5 06:03:59.566559 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 5 06:03:59.574807 systemd[1]: Reload requested from client PID 1282 ('systemctl') (unit ensure-sysext.service)... Sep 5 06:03:59.574907 systemd[1]: Reloading... Sep 5 06:03:59.580412 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 5 06:03:59.580442 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 5 06:03:59.580692 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 5 06:03:59.580988 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 5 06:03:59.581576 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 5 06:03:59.581820 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. Sep 5 06:03:59.581866 systemd-tmpfiles[1283]: ACLs are not supported, ignoring. 
Sep 5 06:03:59.583995 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 06:03:59.584010 systemd-tmpfiles[1283]: Skipping /boot Sep 5 06:03:59.589463 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot. Sep 5 06:03:59.589483 systemd-tmpfiles[1283]: Skipping /boot Sep 5 06:03:59.626783 zram_generator::config[1316]: No configuration found. Sep 5 06:03:59.749220 systemd[1]: Reloading finished in 174 ms. Sep 5 06:03:59.768130 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 5 06:03:59.778810 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 5 06:03:59.785822 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 5 06:03:59.787882 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 5 06:03:59.803082 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 5 06:03:59.805706 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 5 06:03:59.810987 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 5 06:03:59.813117 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 5 06:03:59.818026 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 06:03:59.825636 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 06:03:59.828008 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 06:03:59.832224 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 06:03:59.833160 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 5 06:03:59.833271 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 06:03:59.837061 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 5 06:03:59.839006 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 5 06:03:59.841371 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 06:03:59.841519 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 06:03:59.844301 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 5 06:03:59.845666 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 06:03:59.845998 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 06:03:59.847403 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 06:03:59.847536 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 06:03:59.848106 systemd-udevd[1351]: Using default interface naming scheme 'v255'. Sep 5 06:03:59.857112 augenrules[1380]: No rules Sep 5 06:03:59.857586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 06:03:59.858972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 06:03:59.860804 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 06:03:59.871952 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 06:03:59.872781 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Sep 5 06:03:59.872894 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 06:03:59.873908 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 5 06:03:59.875443 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 06:03:59.876492 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 5 06:03:59.879510 systemd[1]: audit-rules.service: Deactivated successfully. Sep 5 06:03:59.879688 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 5 06:03:59.880920 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 5 06:03:59.884692 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 5 06:03:59.886499 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 06:03:59.887523 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 06:03:59.888836 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 5 06:03:59.888994 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 5 06:03:59.891266 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 5 06:03:59.891405 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 5 06:03:59.920002 systemd[1]: Finished ensure-sysext.service. Sep 5 06:03:59.924094 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 5 06:03:59.929894 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Sep 5 06:03:59.930798 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 5 06:03:59.932009 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 5 06:03:59.939079 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 5 06:03:59.942114 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 5 06:03:59.944184 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 5 06:03:59.946048 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 5 06:03:59.946098 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 5 06:03:59.947579 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 5 06:03:59.952025 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 5 06:03:59.953063 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 5 06:03:59.954963 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 5 06:03:59.960014 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 5 06:03:59.968055 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 5 06:03:59.969859 augenrules[1428]: /sbin/augenrules: No change Sep 5 06:03:59.973402 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 5 06:03:59.974833 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. 
Sep 5 06:03:59.977194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 5 06:03:59.977365 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 5 06:03:59.978961 augenrules[1457]: No rules
Sep 5 06:03:59.979085 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 5 06:03:59.979231 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 5 06:03:59.980499 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 06:03:59.981202 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 5 06:03:59.990169 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 5 06:03:59.990235 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 5 06:03:59.999872 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 5 06:04:00.004867 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 5 06:04:00.026396 systemd-resolved[1349]: Positive Trust Anchors:
Sep 5 06:04:00.026425 systemd-resolved[1349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 5 06:04:00.026457 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 5 06:04:00.028811 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 5 06:04:00.030867 systemd[1]: Reached target time-set.target - System Time Set.
Sep 5 06:04:00.034566 systemd-resolved[1349]: Defaulting to hostname 'linux'.
Sep 5 06:04:00.035656 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 5 06:04:00.037924 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 5 06:04:00.038923 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 5 06:04:00.039886 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 5 06:04:00.040709 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 5 06:04:00.041603 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 5 06:04:00.043267 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 5 06:04:00.044177 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 5 06:04:00.045310 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 5 06:04:00.046267 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 5 06:04:00.046296 systemd[1]: Reached target paths.target - Path Units.
Sep 5 06:04:00.046975 systemd[1]: Reached target timers.target - Timer Units.
Sep 5 06:04:00.048305 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 5 06:04:00.050478 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 5 06:04:00.053229 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 5 06:04:00.054790 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 5 06:04:00.055801 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 5 06:04:00.058957 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 5 06:04:00.060094 systemd-networkd[1436]: lo: Link UP
Sep 5 06:04:00.060105 systemd-networkd[1436]: lo: Gained carrier
Sep 5 06:04:00.060148 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 5 06:04:00.061102 systemd-networkd[1436]: Enumeration completed
Sep 5 06:04:00.061593 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 06:04:00.061604 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 5 06:04:00.061658 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 5 06:04:00.062298 systemd-networkd[1436]: eth0: Link UP
Sep 5 06:04:00.062475 systemd-networkd[1436]: eth0: Gained carrier
Sep 5 06:04:00.062497 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 5 06:04:00.062881 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 5 06:04:00.063827 systemd[1]: Reached target network.target - Network.
Sep 5 06:04:00.064507 systemd[1]: Reached target sockets.target - Socket Units.
Sep 5 06:04:00.065406 systemd[1]: Reached target basic.target - Basic System.
Sep 5 06:04:00.066209 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 5 06:04:00.066240 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 5 06:04:00.067339 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 5 06:04:00.069953 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 5 06:04:00.071977 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 5 06:04:00.081797 systemd-networkd[1436]: eth0: DHCPv4 address 10.0.0.144/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 5 06:04:00.082463 systemd-timesyncd[1441]: Network configuration changed, trying to establish connection.
Sep 5 06:04:00.547624 systemd-resolved[1349]: Clock change detected. Flushing caches.
Sep 5 06:04:00.547660 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 5 06:04:00.547708 systemd-timesyncd[1441]: Initial clock synchronization to Fri 2025-09-05 06:04:00.547582 UTC.
Sep 5 06:04:00.549584 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 5 06:04:00.551636 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 5 06:04:00.552488 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 5 06:04:00.553348 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 5 06:04:00.558603 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 5 06:04:00.559936 jq[1490]: false
Sep 5 06:04:00.560411 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 5 06:04:00.564628 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 5 06:04:00.569458 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 5 06:04:00.572556 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 5 06:04:00.575049 extend-filesystems[1493]: Found /dev/vda6
Sep 5 06:04:00.575141 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 5 06:04:00.577520 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 5 06:04:00.577905 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 5 06:04:00.579288 systemd[1]: Starting update-engine.service - Update Engine...
Sep 5 06:04:00.582551 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 5 06:04:00.584529 extend-filesystems[1493]: Found /dev/vda9
Sep 5 06:04:00.587745 extend-filesystems[1493]: Checking size of /dev/vda9
Sep 5 06:04:00.591800 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 5 06:04:00.593129 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 5 06:04:00.593484 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 5 06:04:00.595559 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 5 06:04:00.595725 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 5 06:04:00.602837 extend-filesystems[1493]: Resized partition /dev/vda9
Sep 5 06:04:00.607700 extend-filesystems[1528]: resize2fs 1.47.2 (1-Jan-2025)
Sep 5 06:04:00.614041 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 5 06:04:00.614936 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 5 06:04:00.619564 jq[1507]: true
Sep 5 06:04:00.624179 tar[1517]: linux-arm64/LICENSE
Sep 5 06:04:00.628097 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 5 06:04:00.632178 systemd[1]: motdgen.service: Deactivated successfully.
Sep 5 06:04:00.637590 tar[1517]: linux-arm64/helm
Sep 5 06:04:00.637617 update_engine[1506]: I20250905 06:04:00.628461 1506 main.cc:92] Flatcar Update Engine starting
Sep 5 06:04:00.632362 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 5 06:04:00.644418 jq[1534]: true
Sep 5 06:04:00.644041 dbus-daemon[1479]: [system] SELinux support is enabled
Sep 5 06:04:00.645603 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 5 06:04:00.646405 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 5 06:04:00.647410 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 5 06:04:00.652195 update_engine[1506]: I20250905 06:04:00.652136 1506 update_check_scheduler.cc:74] Next update check in 10m37s
Sep 5 06:04:00.653738 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 5 06:04:00.653875 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 5 06:04:00.655403 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 5 06:04:00.655419 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 5 06:04:00.658947 extend-filesystems[1528]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 5 06:04:00.658947 extend-filesystems[1528]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 5 06:04:00.658947 extend-filesystems[1528]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 5 06:04:00.657575 systemd[1]: Started update-engine.service - Update Engine.
Sep 5 06:04:00.676621 extend-filesystems[1493]: Resized filesystem in /dev/vda9
Sep 5 06:04:00.661154 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 5 06:04:00.671898 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 5 06:04:00.672070 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 5 06:04:00.705139 bash[1557]: Updated "/home/core/.ssh/authorized_keys"
Sep 5 06:04:00.706793 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 5 06:04:00.710878 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 5 06:04:00.762526 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 5 06:04:00.789026 locksmithd[1540]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 5 06:04:00.792974 systemd-logind[1502]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 5 06:04:00.793162 systemd-logind[1502]: New seat seat0.
Sep 5 06:04:00.799813 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 5 06:04:00.816196 containerd[1527]: time="2025-09-05T06:04:00Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 5 06:04:00.818549 containerd[1527]: time="2025-09-05T06:04:00.816770260Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Sep 5 06:04:00.828067 containerd[1527]: time="2025-09-05T06:04:00.828025900Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.92µs"
Sep 5 06:04:00.828067 containerd[1527]: time="2025-09-05T06:04:00.828058100Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 5 06:04:00.828140 containerd[1527]: time="2025-09-05T06:04:00.828075420Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 5 06:04:00.828233 containerd[1527]: time="2025-09-05T06:04:00.828212260Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 5 06:04:00.828257 containerd[1527]: time="2025-09-05T06:04:00.828233980Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 5 06:04:00.828289 containerd[1527]: time="2025-09-05T06:04:00.828258500Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 5 06:04:00.828326 containerd[1527]: time="2025-09-05T06:04:00.828307900Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 5 06:04:00.828326 containerd[1527]: time="2025-09-05T06:04:00.828322500Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 5 06:04:00.828649 containerd[1527]: time="2025-09-05T06:04:00.828591780Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 5 06:04:00.828649 containerd[1527]: time="2025-09-05T06:04:00.828627020Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 5 06:04:00.828649 containerd[1527]: time="2025-09-05T06:04:00.828639500Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 5 06:04:00.828649 containerd[1527]: time="2025-09-05T06:04:00.828646860Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 5 06:04:00.828754 containerd[1527]: time="2025-09-05T06:04:00.828721020Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 5 06:04:00.828955 containerd[1527]: time="2025-09-05T06:04:00.828932740Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 5 06:04:00.828998 containerd[1527]: time="2025-09-05T06:04:00.828979980Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 5 06:04:00.828998 containerd[1527]: time="2025-09-05T06:04:00.828993940Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 5 06:04:00.829048 containerd[1527]: time="2025-09-05T06:04:00.829037100Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 5 06:04:00.829340 containerd[1527]: time="2025-09-05T06:04:00.829322900Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 5 06:04:00.829432 containerd[1527]: time="2025-09-05T06:04:00.829415300Z" level=info msg="metadata content store policy set" policy=shared
Sep 5 06:04:00.832417 containerd[1527]: time="2025-09-05T06:04:00.832363820Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 5 06:04:00.832463 containerd[1527]: time="2025-09-05T06:04:00.832431260Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 5 06:04:00.832463 containerd[1527]: time="2025-09-05T06:04:00.832446140Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 5 06:04:00.832463 containerd[1527]: time="2025-09-05T06:04:00.832459180Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 5 06:04:00.832519 containerd[1527]: time="2025-09-05T06:04:00.832473300Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 5 06:04:00.832519 containerd[1527]: time="2025-09-05T06:04:00.832483420Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 5 06:04:00.832519 containerd[1527]: time="2025-09-05T06:04:00.832494620Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 5 06:04:00.832519 containerd[1527]: time="2025-09-05T06:04:00.832505700Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 5 06:04:00.832585 containerd[1527]: time="2025-09-05T06:04:00.832549900Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 5 06:04:00.832585 containerd[1527]: time="2025-09-05T06:04:00.832567380Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 5 06:04:00.832585 containerd[1527]: time="2025-09-05T06:04:00.832576780Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 5 06:04:00.832630 containerd[1527]: time="2025-09-05T06:04:00.832588900Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 5 06:04:00.832726 containerd[1527]: time="2025-09-05T06:04:00.832691340Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 5 06:04:00.832726 containerd[1527]: time="2025-09-05T06:04:00.832720820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 5 06:04:00.832771 containerd[1527]: time="2025-09-05T06:04:00.832735620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 5 06:04:00.832771 containerd[1527]: time="2025-09-05T06:04:00.832746740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 5 06:04:00.832771 containerd[1527]: time="2025-09-05T06:04:00.832756820Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 5 06:04:00.832817 containerd[1527]: time="2025-09-05T06:04:00.832772740Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 5 06:04:00.832817 containerd[1527]: time="2025-09-05T06:04:00.832783900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 5 06:04:00.832817 containerd[1527]: time="2025-09-05T06:04:00.832795420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 5 06:04:00.832817 containerd[1527]: time="2025-09-05T06:04:00.832809140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 5 06:04:00.832885 containerd[1527]: time="2025-09-05T06:04:00.832819700Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 5 06:04:00.832885 containerd[1527]: time="2025-09-05T06:04:00.832830900Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 5 06:04:00.833032 containerd[1527]: time="2025-09-05T06:04:00.833004100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 5 06:04:00.833032 containerd[1527]: time="2025-09-05T06:04:00.833023940Z" level=info msg="Start snapshots syncer"
Sep 5 06:04:00.833087 containerd[1527]: time="2025-09-05T06:04:00.833046740Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 5 06:04:00.833275 containerd[1527]: time="2025-09-05T06:04:00.833240100Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 5 06:04:00.833409 containerd[1527]: time="2025-09-05T06:04:00.833289740Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 5 06:04:00.833409 containerd[1527]: time="2025-09-05T06:04:00.833354500Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 5 06:04:00.833496 containerd[1527]: time="2025-09-05T06:04:00.833473540Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 5 06:04:00.833523 containerd[1527]: time="2025-09-05T06:04:00.833505620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 5 06:04:00.833523 containerd[1527]: time="2025-09-05T06:04:00.833517940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 5 06:04:00.833663 containerd[1527]: time="2025-09-05T06:04:00.833528460Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 5 06:04:00.833663 containerd[1527]: time="2025-09-05T06:04:00.833540500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 5 06:04:00.833663 containerd[1527]: time="2025-09-05T06:04:00.833550380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 5 06:04:00.833663 containerd[1527]: time="2025-09-05T06:04:00.833560420Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 5 06:04:00.833663 containerd[1527]: time="2025-09-05T06:04:00.833582980Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 5 06:04:00.833663 containerd[1527]: time="2025-09-05T06:04:00.833593940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 5 06:04:00.833663 containerd[1527]: time="2025-09-05T06:04:00.833608860Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 5 06:04:00.833663 containerd[1527]: time="2025-09-05T06:04:00.833648220Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 5 06:04:00.833663 containerd[1527]: time="2025-09-05T06:04:00.833662020Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 5 06:04:00.833815 containerd[1527]: time="2025-09-05T06:04:00.833670860Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 5 06:04:00.833815 containerd[1527]: time="2025-09-05T06:04:00.833680140Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 5 06:04:00.833815 containerd[1527]: time="2025-09-05T06:04:00.833688060Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 5 06:04:00.833815 containerd[1527]: time="2025-09-05T06:04:00.833697140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 5 06:04:00.833815 containerd[1527]: time="2025-09-05T06:04:00.833707420Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 5 06:04:00.833815 containerd[1527]: time="2025-09-05T06:04:00.833780980Z" level=info msg="runtime interface created"
Sep 5 06:04:00.833815 containerd[1527]: time="2025-09-05T06:04:00.833785620Z" level=info msg="created NRI interface"
Sep 5 06:04:00.833815 containerd[1527]: time="2025-09-05T06:04:00.833793420Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 5 06:04:00.833815 containerd[1527]: time="2025-09-05T06:04:00.833802980Z" level=info msg="Connect containerd service"
Sep 5 06:04:00.833950 containerd[1527]: time="2025-09-05T06:04:00.833829340Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 5 06:04:00.834492 containerd[1527]: time="2025-09-05T06:04:00.834466700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 5 06:04:00.901865 containerd[1527]: time="2025-09-05T06:04:00.901361420Z" level=info msg="Start subscribing containerd event"
Sep 5 06:04:00.901865 containerd[1527]: time="2025-09-05T06:04:00.901465140Z" level=info msg="Start recovering state"
Sep 5 06:04:00.901865 containerd[1527]: time="2025-09-05T06:04:00.901568500Z" level=info msg="Start event monitor"
Sep 5 06:04:00.901865 containerd[1527]: time="2025-09-05T06:04:00.901606500Z" level=info msg="Start cni network conf syncer for default"
Sep 5 06:04:00.901865 containerd[1527]: time="2025-09-05T06:04:00.901620580Z" level=info msg="Start streaming server"
Sep 5 06:04:00.901865 containerd[1527]: time="2025-09-05T06:04:00.901629300Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 5 06:04:00.901865 containerd[1527]: time="2025-09-05T06:04:00.901637020Z" level=info msg="runtime interface starting up..."
Sep 5 06:04:00.901865 containerd[1527]: time="2025-09-05T06:04:00.901645860Z" level=info msg="starting plugins..."
Sep 5 06:04:00.901865 containerd[1527]: time="2025-09-05T06:04:00.901653220Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 5 06:04:00.901865 containerd[1527]: time="2025-09-05T06:04:00.901664820Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 5 06:04:00.901865 containerd[1527]: time="2025-09-05T06:04:00.901705260Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 5 06:04:00.901865 containerd[1527]: time="2025-09-05T06:04:00.901861180Z" level=info msg="containerd successfully booted in 0.086013s"
Sep 5 06:04:00.902253 systemd[1]: Started containerd.service - containerd container runtime.
Sep 5 06:04:00.966499 sshd_keygen[1532]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 5 06:04:00.985871 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 5 06:04:00.990909 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 5 06:04:01.005964 systemd[1]: issuegen.service: Deactivated successfully.
Sep 5 06:04:01.007434 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 5 06:04:01.009895 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 5 06:04:01.016234 tar[1517]: linux-arm64/README.md
Sep 5 06:04:01.029145 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 5 06:04:01.032419 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 5 06:04:01.035257 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 5 06:04:01.037291 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 5 06:04:01.038622 systemd[1]: Reached target getty.target - Login Prompts.
Sep 5 06:04:01.740553 systemd-networkd[1436]: eth0: Gained IPv6LL
Sep 5 06:04:01.744467 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 5 06:04:01.745904 systemd[1]: Reached target network-online.target - Network is Online.
Sep 5 06:04:01.748029 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 5 06:04:01.750057 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 06:04:01.751980 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 5 06:04:01.778611 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 5 06:04:01.780270 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 5 06:04:01.780554 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 5 06:04:01.782181 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 5 06:04:02.289103 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 06:04:02.290509 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 5 06:04:02.292196 systemd[1]: Startup finished in 1.970s (kernel) + 4.994s (initrd) + 3.281s (userspace) = 10.246s.
Sep 5 06:04:02.292576 (kubelet)[1633]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 06:04:02.630090 kubelet[1633]: E0905 06:04:02.629967 1633 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 06:04:02.632610 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 06:04:02.632740 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 06:04:02.633052 systemd[1]: kubelet.service: Consumed 740ms CPU time, 257.5M memory peak.
Sep 5 06:04:06.585631 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 5 06:04:06.586578 systemd[1]: Started sshd@0-10.0.0.144:22-10.0.0.1:43980.service - OpenSSH per-connection server daemon (10.0.0.1:43980).
Sep 5 06:04:06.643431 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 43980 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:04:06.644749 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:04:06.650717 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 5 06:04:06.651578 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 5 06:04:06.656470 systemd-logind[1502]: New session 1 of user core.
Sep 5 06:04:06.672780 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 5 06:04:06.675304 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 5 06:04:06.697606 (systemd)[1652]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 5 06:04:06.699885 systemd-logind[1502]: New session c1 of user core.
Sep 5 06:04:06.816556 systemd[1652]: Queued start job for default target default.target.
Sep 5 06:04:06.836502 systemd[1652]: Created slice app.slice - User Application Slice.
Sep 5 06:04:06.836532 systemd[1652]: Reached target paths.target - Paths.
Sep 5 06:04:06.836570 systemd[1652]: Reached target timers.target - Timers.
Sep 5 06:04:06.837792 systemd[1652]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 5 06:04:06.847262 systemd[1652]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 5 06:04:06.847487 systemd[1652]: Reached target sockets.target - Sockets.
Sep 5 06:04:06.847595 systemd[1652]: Reached target basic.target - Basic System.
Sep 5 06:04:06.847714 systemd[1652]: Reached target default.target - Main User Target.
Sep 5 06:04:06.847752 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 5 06:04:06.847850 systemd[1652]: Startup finished in 142ms.
Sep 5 06:04:06.848858 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 5 06:04:06.926818 systemd[1]: Started sshd@1-10.0.0.144:22-10.0.0.1:43990.service - OpenSSH per-connection server daemon (10.0.0.1:43990).
Sep 5 06:04:06.967672 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 43990 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:04:06.969257 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:04:06.973337 systemd-logind[1502]: New session 2 of user core.
Sep 5 06:04:06.979562 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 5 06:04:07.030424 sshd[1666]: Connection closed by 10.0.0.1 port 43990
Sep 5 06:04:07.031023 sshd-session[1663]: pam_unix(sshd:session): session closed for user core
Sep 5 06:04:07.045490 systemd[1]: sshd@1-10.0.0.144:22-10.0.0.1:43990.service: Deactivated successfully.
Sep 5 06:04:07.047618 systemd[1]: session-2.scope: Deactivated successfully.
Sep 5 06:04:07.049440 systemd-logind[1502]: Session 2 logged out. Waiting for processes to exit.
Sep 5 06:04:07.051660 systemd-logind[1502]: Removed session 2.
Sep 5 06:04:07.052619 systemd[1]: Started sshd@2-10.0.0.144:22-10.0.0.1:43996.service - OpenSSH per-connection server daemon (10.0.0.1:43996).
Sep 5 06:04:07.111783 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 43996 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:04:07.112833 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:04:07.119477 systemd-logind[1502]: New session 3 of user core.
Sep 5 06:04:07.129788 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 5 06:04:07.182035 sshd[1675]: Connection closed by 10.0.0.1 port 43996
Sep 5 06:04:07.182317 sshd-session[1672]: pam_unix(sshd:session): session closed for user core
Sep 5 06:04:07.199609 systemd[1]: sshd@2-10.0.0.144:22-10.0.0.1:43996.service: Deactivated successfully.
Sep 5 06:04:07.201780 systemd[1]: session-3.scope: Deactivated successfully.
Sep 5 06:04:07.202572 systemd-logind[1502]: Session 3 logged out. Waiting for processes to exit.
Sep 5 06:04:07.204994 systemd[1]: Started sshd@3-10.0.0.144:22-10.0.0.1:44004.service - OpenSSH per-connection server daemon (10.0.0.1:44004).
Sep 5 06:04:07.207398 systemd-logind[1502]: Removed session 3.
Sep 5 06:04:07.260466 sshd[1681]: Accepted publickey for core from 10.0.0.1 port 44004 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:04:07.261210 sshd-session[1681]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:04:07.265776 systemd-logind[1502]: New session 4 of user core.
Sep 5 06:04:07.274540 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 5 06:04:07.329045 sshd[1684]: Connection closed by 10.0.0.1 port 44004
Sep 5 06:04:07.329609 sshd-session[1681]: pam_unix(sshd:session): session closed for user core
Sep 5 06:04:07.340229 systemd[1]: sshd@3-10.0.0.144:22-10.0.0.1:44004.service: Deactivated successfully.
Sep 5 06:04:07.342744 systemd[1]: session-4.scope: Deactivated successfully.
Sep 5 06:04:07.343616 systemd-logind[1502]: Session 4 logged out. Waiting for processes to exit.
Sep 5 06:04:07.346014 systemd[1]: Started sshd@4-10.0.0.144:22-10.0.0.1:44010.service - OpenSSH per-connection server daemon (10.0.0.1:44010).
Sep 5 06:04:07.346574 systemd-logind[1502]: Removed session 4.
Sep 5 06:04:07.407092 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 44010 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:04:07.409242 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:04:07.413992 systemd-logind[1502]: New session 5 of user core.
Sep 5 06:04:07.425596 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 5 06:04:07.487695 sudo[1694]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 5 06:04:07.487960 sudo[1694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 06:04:07.507272 sudo[1694]: pam_unix(sudo:session): session closed for user root
Sep 5 06:04:07.510183 sshd[1693]: Connection closed by 10.0.0.1 port 44010
Sep 5 06:04:07.509353 sshd-session[1690]: pam_unix(sshd:session): session closed for user core
Sep 5 06:04:07.519525 systemd[1]: sshd@4-10.0.0.144:22-10.0.0.1:44010.service: Deactivated successfully.
Sep 5 06:04:07.523320 systemd[1]: session-5.scope: Deactivated successfully.
Sep 5 06:04:07.526582 systemd-logind[1502]: Session 5 logged out. Waiting for processes to exit.
Sep 5 06:04:07.529047 systemd[1]: Started sshd@5-10.0.0.144:22-10.0.0.1:44026.service - OpenSSH per-connection server daemon (10.0.0.1:44026).
Sep 5 06:04:07.529983 systemd-logind[1502]: Removed session 5.
Sep 5 06:04:07.583399 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 44026 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:04:07.584535 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:04:07.589011 systemd-logind[1502]: New session 6 of user core.
Sep 5 06:04:07.597574 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 5 06:04:07.648998 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 5 06:04:07.649261 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 06:04:07.775288 sudo[1705]: pam_unix(sudo:session): session closed for user root
Sep 5 06:04:07.780242 sudo[1704]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 5 06:04:07.780780 sudo[1704]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 06:04:07.789062 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 5 06:04:07.819409 augenrules[1727]: No rules
Sep 5 06:04:07.820417 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 5 06:04:07.822476 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 5 06:04:07.823677 sudo[1704]: pam_unix(sudo:session): session closed for user root
Sep 5 06:04:07.825016 sshd[1703]: Connection closed by 10.0.0.1 port 44026
Sep 5 06:04:07.825300 sshd-session[1700]: pam_unix(sshd:session): session closed for user core
Sep 5 06:04:07.837216 systemd[1]: sshd@5-10.0.0.144:22-10.0.0.1:44026.service: Deactivated successfully.
Sep 5 06:04:07.838666 systemd[1]: session-6.scope: Deactivated successfully.
Sep 5 06:04:07.841457 systemd-logind[1502]: Session 6 logged out. Waiting for processes to exit.
Sep 5 06:04:07.842978 systemd[1]: Started sshd@6-10.0.0.144:22-10.0.0.1:44038.service - OpenSSH per-connection server daemon (10.0.0.1:44038).
Sep 5 06:04:07.843970 systemd-logind[1502]: Removed session 6.
Sep 5 06:04:07.898139 sshd[1736]: Accepted publickey for core from 10.0.0.1 port 44038 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:04:07.899277 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:04:07.903067 systemd-logind[1502]: New session 7 of user core.
Sep 5 06:04:07.918579 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 5 06:04:07.968707 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 5 06:04:07.968963 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 5 06:04:08.239692 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 5 06:04:08.255697 (dockerd)[1761]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 5 06:04:08.451054 dockerd[1761]: time="2025-09-05T06:04:08.450996940Z" level=info msg="Starting up"
Sep 5 06:04:08.453080 dockerd[1761]: time="2025-09-05T06:04:08.453033380Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 5 06:04:08.463004 dockerd[1761]: time="2025-09-05T06:04:08.462964660Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 5 06:04:08.475164 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2039155316-merged.mount: Deactivated successfully.
Sep 5 06:04:08.498258 dockerd[1761]: time="2025-09-05T06:04:08.498150220Z" level=info msg="Loading containers: start."
Sep 5 06:04:08.508412 kernel: Initializing XFRM netlink socket
Sep 5 06:04:08.696144 systemd-networkd[1436]: docker0: Link UP
Sep 5 06:04:08.699180 dockerd[1761]: time="2025-09-05T06:04:08.699142140Z" level=info msg="Loading containers: done."
Sep 5 06:04:08.713081 dockerd[1761]: time="2025-09-05T06:04:08.713024260Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 5 06:04:08.713217 dockerd[1761]: time="2025-09-05T06:04:08.713115340Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 5 06:04:08.713217 dockerd[1761]: time="2025-09-05T06:04:08.713194660Z" level=info msg="Initializing buildkit"
Sep 5 06:04:08.741054 dockerd[1761]: time="2025-09-05T06:04:08.741012700Z" level=info msg="Completed buildkit initialization"
Sep 5 06:04:08.745766 dockerd[1761]: time="2025-09-05T06:04:08.745721860Z" level=info msg="Daemon has completed initialization"
Sep 5 06:04:08.745957 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 5 06:04:08.746200 dockerd[1761]: time="2025-09-05T06:04:08.745878620Z" level=info msg="API listen on /run/docker.sock"
Sep 5 06:04:09.277147 containerd[1527]: time="2025-09-05T06:04:09.276730820Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 5 06:04:09.880998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3656829112.mount: Deactivated successfully.
Sep 5 06:04:10.823638 containerd[1527]: time="2025-09-05T06:04:10.823555220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:10.825046 containerd[1527]: time="2025-09-05T06:04:10.825016820Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615"
Sep 5 06:04:10.826025 containerd[1527]: time="2025-09-05T06:04:10.825603700Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:10.828916 containerd[1527]: time="2025-09-05T06:04:10.828886260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:10.829667 containerd[1527]: time="2025-09-05T06:04:10.829637900Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.55286616s"
Sep 5 06:04:10.829718 containerd[1527]: time="2025-09-05T06:04:10.829672260Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\""
Sep 5 06:04:10.830777 containerd[1527]: time="2025-09-05T06:04:10.830754860Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 5 06:04:12.066742 containerd[1527]: time="2025-09-05T06:04:12.066693100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:12.067878 containerd[1527]: time="2025-09-05T06:04:12.067620780Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979"
Sep 5 06:04:12.068642 containerd[1527]: time="2025-09-05T06:04:12.068607740Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:12.071709 containerd[1527]: time="2025-09-05T06:04:12.071678340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:12.072487 containerd[1527]: time="2025-09-05T06:04:12.072450820Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.24166768s"
Sep 5 06:04:12.072487 containerd[1527]: time="2025-09-05T06:04:12.072485780Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\""
Sep 5 06:04:12.072912 containerd[1527]: time="2025-09-05T06:04:12.072845380Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 5 06:04:12.659314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 5 06:04:12.660614 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 06:04:12.808376 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 06:04:12.811560 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 06:04:12.845836 kubelet[2044]: E0905 06:04:12.845784 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 06:04:12.848988 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 06:04:12.849118 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 06:04:12.850526 systemd[1]: kubelet.service: Consumed 138ms CPU time, 106.1M memory peak.
Sep 5 06:04:13.805434 containerd[1527]: time="2025-09-05T06:04:13.805041100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:13.806114 containerd[1527]: time="2025-09-05T06:04:13.806072940Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016"
Sep 5 06:04:13.806883 containerd[1527]: time="2025-09-05T06:04:13.806821580Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:13.809409 containerd[1527]: time="2025-09-05T06:04:13.809140820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:13.810504 containerd[1527]: time="2025-09-05T06:04:13.810473900Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.73696416s"
Sep 5 06:04:13.810545 containerd[1527]: time="2025-09-05T06:04:13.810508100Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\""
Sep 5 06:04:13.811298 containerd[1527]: time="2025-09-05T06:04:13.811216060Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 5 06:04:14.789682 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2538638398.mount: Deactivated successfully.
Sep 5 06:04:15.047417 containerd[1527]: time="2025-09-05T06:04:15.047292300Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:15.048197 containerd[1527]: time="2025-09-05T06:04:15.048166660Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961"
Sep 5 06:04:15.049401 containerd[1527]: time="2025-09-05T06:04:15.049120940Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:15.051022 containerd[1527]: time="2025-09-05T06:04:15.050997740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:15.051461 containerd[1527]: time="2025-09-05T06:04:15.051431180Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.24013648s"
Sep 5 06:04:15.051510 containerd[1527]: time="2025-09-05T06:04:15.051462980Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\""
Sep 5 06:04:15.052197 containerd[1527]: time="2025-09-05T06:04:15.052174980Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 5 06:04:15.728024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761448852.mount: Deactivated successfully.
Sep 5 06:04:16.692534 containerd[1527]: time="2025-09-05T06:04:16.692478180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:16.693756 containerd[1527]: time="2025-09-05T06:04:16.693716540Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Sep 5 06:04:16.694620 containerd[1527]: time="2025-09-05T06:04:16.694590140Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:16.697403 containerd[1527]: time="2025-09-05T06:04:16.696941900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:16.698092 containerd[1527]: time="2025-09-05T06:04:16.697936500Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.64566812s"
Sep 5 06:04:16.698092 containerd[1527]: time="2025-09-05T06:04:16.697972940Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 5 06:04:16.698518 containerd[1527]: time="2025-09-05T06:04:16.698493100Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 5 06:04:17.108332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1481104333.mount: Deactivated successfully.
Sep 5 06:04:17.113110 containerd[1527]: time="2025-09-05T06:04:17.113062580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 06:04:17.113547 containerd[1527]: time="2025-09-05T06:04:17.113519980Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 5 06:04:17.114387 containerd[1527]: time="2025-09-05T06:04:17.114354340Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 06:04:17.116131 containerd[1527]: time="2025-09-05T06:04:17.116105380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 5 06:04:17.116726 containerd[1527]: time="2025-09-05T06:04:17.116704780Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 418.17928ms"
Sep 5 06:04:17.116793 containerd[1527]: time="2025-09-05T06:04:17.116731940Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 5 06:04:17.117404 containerd[1527]: time="2025-09-05T06:04:17.117355340Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 5 06:04:17.544259 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4169726398.mount: Deactivated successfully.
Sep 5 06:04:19.381941 containerd[1527]: time="2025-09-05T06:04:19.381896780Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:19.383210 containerd[1527]: time="2025-09-05T06:04:19.383181980Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297"
Sep 5 06:04:19.385401 containerd[1527]: time="2025-09-05T06:04:19.384469500Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:19.386717 containerd[1527]: time="2025-09-05T06:04:19.386682700Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:04:19.387941 containerd[1527]: time="2025-09-05T06:04:19.387911020Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.27048908s"
Sep 5 06:04:19.387990 containerd[1527]: time="2025-09-05T06:04:19.387942020Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 5 06:04:22.909505 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 5 06:04:22.910853 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 06:04:23.071258 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 06:04:23.074813 (kubelet)[2208]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 5 06:04:23.105139 kubelet[2208]: E0905 06:04:23.105092 2208 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 5 06:04:23.107699 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 5 06:04:23.107815 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 5 06:04:23.108082 systemd[1]: kubelet.service: Consumed 121ms CPU time, 107.3M memory peak.
Sep 5 06:04:24.835559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 06:04:24.835691 systemd[1]: kubelet.service: Consumed 121ms CPU time, 107.3M memory peak.
Sep 5 06:04:24.837466 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 06:04:24.856112 systemd[1]: Reload requested from client PID 2223 ('systemctl') (unit session-7.scope)...
Sep 5 06:04:24.856129 systemd[1]: Reloading...
Sep 5 06:04:24.933433 zram_generator::config[2270]: No configuration found.
Sep 5 06:04:25.243246 systemd[1]: Reloading finished in 386 ms.
Sep 5 06:04:25.315899 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 5 06:04:25.315967 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 5 06:04:25.316218 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 06:04:25.316280 systemd[1]: kubelet.service: Consumed 93ms CPU time, 95.1M memory peak.
Sep 5 06:04:25.317533 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 5 06:04:25.418890 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 5 06:04:25.422218 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 5 06:04:25.451348 kubelet[2312]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 06:04:25.451348 kubelet[2312]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 5 06:04:25.451348 kubelet[2312]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 5 06:04:25.451661 kubelet[2312]: I0905 06:04:25.451376 2312 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 5 06:04:26.106672 kubelet[2312]: I0905 06:04:26.106627 2312 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 5 06:04:26.106672 kubelet[2312]: I0905 06:04:26.106658 2312 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 5 06:04:26.106918 kubelet[2312]: I0905 06:04:26.106890 2312 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 5 06:04:26.123607 kubelet[2312]: E0905 06:04:26.123576 2312 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.144:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 5 06:04:26.125193 kubelet[2312]: I0905 06:04:26.125156 2312 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 5 06:04:26.131497 kubelet[2312]: I0905 06:04:26.131477 2312 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 5 06:04:26.133945 kubelet[2312]: I0905 06:04:26.133930 2312 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 5 06:04:26.134943 kubelet[2312]: I0905 06:04:26.134894 2312 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 5 06:04:26.135077 kubelet[2312]: I0905 06:04:26.134945 2312 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 5 06:04:26.135160 kubelet[2312]: I0905 06:04:26.135142 2312 topology_manager.go:138] "Creating topology manager with none policy"
Sep 5 06:04:26.135160 kubelet[2312]: I0905 06:04:26.135150 2312 container_manager_linux.go:303] "Creating device plugin manager"
Sep 5 06:04:26.135944 kubelet[2312]: I0905 06:04:26.135908 2312 state_mem.go:36] "Initialized new in-memory state store"
Sep 5 06:04:26.138822 kubelet[2312]: I0905 06:04:26.138804 2312 kubelet.go:480] "Attempting to sync node with API server"
Sep 5 06:04:26.138854 kubelet[2312]: I0905 06:04:26.138825 2312 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 5 06:04:26.138854 kubelet[2312]: I0905 06:04:26.138849 2312 kubelet.go:386] "Adding apiserver pod source"
Sep 5 06:04:26.138903 kubelet[2312]: I0905 06:04:26.138862 2312 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 5 06:04:26.144137 kubelet[2312]: I0905 06:04:26.144113 2312 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 5 06:04:26.146331 kubelet[2312]: E0905 06:04:26.144576 2312 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 5 06:04:26.146331 kubelet[2312]: I0905 06:04:26.145218 2312 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 5 06:04:26.146331 kubelet[2312]: W0905 06:04:26.145358 2312 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 5 06:04:26.146331 kubelet[2312]: E0905 06:04:26.145415 2312 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.144:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 5 06:04:26.148007 kubelet[2312]: I0905 06:04:26.147974 2312 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 06:04:26.148058 kubelet[2312]: I0905 06:04:26.148015 2312 server.go:1289] "Started kubelet" Sep 5 06:04:26.148151 kubelet[2312]: I0905 06:04:26.148119 2312 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 06:04:26.149510 kubelet[2312]: I0905 06:04:26.149492 2312 server.go:317] "Adding debug handlers to kubelet server" Sep 5 06:04:26.149681 kubelet[2312]: I0905 06:04:26.149652 2312 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 06:04:26.150858 kubelet[2312]: I0905 06:04:26.150835 2312 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 06:04:26.152684 kubelet[2312]: I0905 06:04:26.152621 2312 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 06:04:26.154727 kubelet[2312]: I0905 06:04:26.154709 2312 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 06:04:26.154839 kubelet[2312]: I0905 06:04:26.153511 2312 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 06:04:26.154968 kubelet[2312]: E0905 06:04:26.150129 2312 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.144:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.144:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18624dba0f8bcc0c default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 06:04:26.14799054 +0000 UTC m=+0.722794881,LastTimestamp:2025-09-05 06:04:26.14799054 +0000 UTC m=+0.722794881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 06:04:26.155049 kubelet[2312]: E0905 06:04:26.153658 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:04:26.155095 kubelet[2312]: E0905 06:04:26.153567 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="200ms" Sep 5 06:04:26.155151 kubelet[2312]: E0905 06:04:26.154049 2312 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.144:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 5 06:04:26.155196 kubelet[2312]: I0905 06:04:26.154315 2312 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 06:04:26.155413 kubelet[2312]: I0905 06:04:26.153521 2312 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 06:04:26.155818 kubelet[2312]: I0905 06:04:26.155546 2312 reconciler.go:26] "Reconciler: start to sync state" Sep 5 06:04:26.156377 kubelet[2312]: I0905 
06:04:26.156357 2312 factory.go:223] Registration of the containerd container factory successfully Sep 5 06:04:26.156475 kubelet[2312]: I0905 06:04:26.156464 2312 factory.go:223] Registration of the systemd container factory successfully Sep 5 06:04:26.157410 kubelet[2312]: E0905 06:04:26.157341 2312 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 5 06:04:26.164836 kubelet[2312]: I0905 06:04:26.164818 2312 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 06:04:26.164959 kubelet[2312]: I0905 06:04:26.164922 2312 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 06:04:26.165040 kubelet[2312]: I0905 06:04:26.165031 2312 state_mem.go:36] "Initialized new in-memory state store" Sep 5 06:04:26.166873 kubelet[2312]: I0905 06:04:26.166851 2312 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 06:04:26.168417 kubelet[2312]: I0905 06:04:26.168065 2312 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 5 06:04:26.168417 kubelet[2312]: I0905 06:04:26.168084 2312 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 06:04:26.168417 kubelet[2312]: I0905 06:04:26.168100 2312 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 5 06:04:26.168417 kubelet[2312]: I0905 06:04:26.168106 2312 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 06:04:26.168417 kubelet[2312]: E0905 06:04:26.168138 2312 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 06:04:26.170820 kubelet[2312]: E0905 06:04:26.170681 2312 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.144:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 5 06:04:26.255488 kubelet[2312]: E0905 06:04:26.255444 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:04:26.257879 kubelet[2312]: I0905 06:04:26.257851 2312 policy_none.go:49] "None policy: Start" Sep 5 06:04:26.257879 kubelet[2312]: I0905 06:04:26.257880 2312 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 06:04:26.257966 kubelet[2312]: I0905 06:04:26.257894 2312 state_mem.go:35] "Initializing new in-memory state store" Sep 5 06:04:26.262782 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 5 06:04:26.268396 kubelet[2312]: E0905 06:04:26.268345 2312 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 5 06:04:26.274038 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 5 06:04:26.276758 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 5 06:04:26.295397 kubelet[2312]: E0905 06:04:26.295213 2312 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 06:04:26.295881 kubelet[2312]: I0905 06:04:26.295856 2312 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 06:04:26.295925 kubelet[2312]: I0905 06:04:26.295877 2312 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 06:04:26.296290 kubelet[2312]: I0905 06:04:26.296155 2312 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 06:04:26.297340 kubelet[2312]: E0905 06:04:26.297315 2312 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 06:04:26.297455 kubelet[2312]: E0905 06:04:26.297355 2312 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 5 06:04:26.356007 kubelet[2312]: E0905 06:04:26.355966 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="400ms" Sep 5 06:04:26.397311 kubelet[2312]: I0905 06:04:26.397179 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 06:04:26.397672 kubelet[2312]: E0905 06:04:26.397621 2312 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Sep 5 06:04:26.478599 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
Sep 5 06:04:26.497658 kubelet[2312]: E0905 06:04:26.497619 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:04:26.500949 systemd[1]: Created slice kubepods-burstable-podc31566f5e156ce257f550d131f53649e.slice - libcontainer container kubepods-burstable-podc31566f5e156ce257f550d131f53649e.slice. Sep 5 06:04:26.503217 kubelet[2312]: E0905 06:04:26.502741 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:04:26.504020 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 5 06:04:26.505377 kubelet[2312]: E0905 06:04:26.505350 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:04:26.557807 kubelet[2312]: I0905 06:04:26.557752 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c31566f5e156ce257f550d131f53649e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c31566f5e156ce257f550d131f53649e\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:04:26.557807 kubelet[2312]: I0905 06:04:26.557790 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:04:26.557807 kubelet[2312]: I0905 06:04:26.557808 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:04:26.557954 kubelet[2312]: I0905 06:04:26.557825 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 5 06:04:26.557954 kubelet[2312]: I0905 06:04:26.557839 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c31566f5e156ce257f550d131f53649e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c31566f5e156ce257f550d131f53649e\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:04:26.557954 kubelet[2312]: I0905 06:04:26.557853 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c31566f5e156ce257f550d131f53649e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c31566f5e156ce257f550d131f53649e\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:04:26.557954 kubelet[2312]: I0905 06:04:26.557867 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:04:26.557954 kubelet[2312]: I0905 06:04:26.557881 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:04:26.558048 kubelet[2312]: I0905 06:04:26.557894 2312 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:04:26.599049 kubelet[2312]: I0905 06:04:26.599013 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 06:04:26.599437 kubelet[2312]: E0905 06:04:26.599361 2312 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Sep 5 06:04:26.756495 kubelet[2312]: E0905 06:04:26.756442 2312 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.144:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.144:6443: connect: connection refused" interval="800ms" Sep 5 06:04:26.799230 containerd[1527]: time="2025-09-05T06:04:26.799190580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 5 06:04:26.803971 containerd[1527]: time="2025-09-05T06:04:26.803852020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c31566f5e156ce257f550d131f53649e,Namespace:kube-system,Attempt:0,}" Sep 5 06:04:26.806870 containerd[1527]: time="2025-09-05T06:04:26.806785340Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 5 06:04:26.822900 containerd[1527]: time="2025-09-05T06:04:26.822867180Z" level=info msg="connecting to shim beceecc0aaa437526f058948b16ec4ca9f1e9a818bb739834b142baa161b072d" address="unix:///run/containerd/s/eb2235bd26bf88beb6bd31ee403efe9f5ee2352ac38e086697f78664464d0591" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:04:26.841129 containerd[1527]: time="2025-09-05T06:04:26.840863860Z" level=info msg="connecting to shim ac94024d6fe714423cf033353700b13531bcfcf36e2ec860ccfcbf2123654dde" address="unix:///run/containerd/s/aec1507e7e3faf99e3bcb41f0420096f3c433217397cab61a7184d3a7f6d6dd5" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:04:26.842662 containerd[1527]: time="2025-09-05T06:04:26.842632300Z" level=info msg="connecting to shim fd5f84fe32fd6266ba52546f77628225e47dc24abc7c1ba2ae0e012dc5ef70fc" address="unix:///run/containerd/s/eb6bb94e29c05dfe0963c234f6cfd83baf4137564fc20b6f98378fb22c10c55d" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:04:26.847563 systemd[1]: Started cri-containerd-beceecc0aaa437526f058948b16ec4ca9f1e9a818bb739834b142baa161b072d.scope - libcontainer container beceecc0aaa437526f058948b16ec4ca9f1e9a818bb739834b142baa161b072d. Sep 5 06:04:26.873606 systemd[1]: Started cri-containerd-ac94024d6fe714423cf033353700b13531bcfcf36e2ec860ccfcbf2123654dde.scope - libcontainer container ac94024d6fe714423cf033353700b13531bcfcf36e2ec860ccfcbf2123654dde. Sep 5 06:04:26.875188 systemd[1]: Started cri-containerd-fd5f84fe32fd6266ba52546f77628225e47dc24abc7c1ba2ae0e012dc5ef70fc.scope - libcontainer container fd5f84fe32fd6266ba52546f77628225e47dc24abc7c1ba2ae0e012dc5ef70fc. 
Sep 5 06:04:26.889183 containerd[1527]: time="2025-09-05T06:04:26.889148580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"beceecc0aaa437526f058948b16ec4ca9f1e9a818bb739834b142baa161b072d\"" Sep 5 06:04:26.895128 containerd[1527]: time="2025-09-05T06:04:26.895076780Z" level=info msg="CreateContainer within sandbox \"beceecc0aaa437526f058948b16ec4ca9f1e9a818bb739834b142baa161b072d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 5 06:04:26.904136 containerd[1527]: time="2025-09-05T06:04:26.904100300Z" level=info msg="Container 938089b607624eb3dcdbd9f787c1a0781fbe0d25e42c3d28a45eaaa08928bbc4: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:04:26.912200 containerd[1527]: time="2025-09-05T06:04:26.912165380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac94024d6fe714423cf033353700b13531bcfcf36e2ec860ccfcbf2123654dde\"" Sep 5 06:04:26.913681 containerd[1527]: time="2025-09-05T06:04:26.913613660Z" level=info msg="CreateContainer within sandbox \"beceecc0aaa437526f058948b16ec4ca9f1e9a818bb739834b142baa161b072d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"938089b607624eb3dcdbd9f787c1a0781fbe0d25e42c3d28a45eaaa08928bbc4\"" Sep 5 06:04:26.914166 containerd[1527]: time="2025-09-05T06:04:26.914123260Z" level=info msg="StartContainer for \"938089b607624eb3dcdbd9f787c1a0781fbe0d25e42c3d28a45eaaa08928bbc4\"" Sep 5 06:04:26.915893 containerd[1527]: time="2025-09-05T06:04:26.915866940Z" level=info msg="connecting to shim 938089b607624eb3dcdbd9f787c1a0781fbe0d25e42c3d28a45eaaa08928bbc4" address="unix:///run/containerd/s/eb2235bd26bf88beb6bd31ee403efe9f5ee2352ac38e086697f78664464d0591" protocol=ttrpc version=3 Sep 5 06:04:26.916397 containerd[1527]: 
time="2025-09-05T06:04:26.916365060Z" level=info msg="CreateContainer within sandbox \"ac94024d6fe714423cf033353700b13531bcfcf36e2ec860ccfcbf2123654dde\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 5 06:04:26.920592 containerd[1527]: time="2025-09-05T06:04:26.920098620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c31566f5e156ce257f550d131f53649e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd5f84fe32fd6266ba52546f77628225e47dc24abc7c1ba2ae0e012dc5ef70fc\"" Sep 5 06:04:26.922969 containerd[1527]: time="2025-09-05T06:04:26.922939500Z" level=info msg="CreateContainer within sandbox \"fd5f84fe32fd6266ba52546f77628225e47dc24abc7c1ba2ae0e012dc5ef70fc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 5 06:04:26.924349 containerd[1527]: time="2025-09-05T06:04:26.924324260Z" level=info msg="Container b4e219bb090a7a3c3598af37758cd2642961276481cb3bfd175bfd5bc3a159b1: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:04:26.929013 containerd[1527]: time="2025-09-05T06:04:26.928985820Z" level=info msg="Container 5cb1309b6adf9b878e2c3d3221b4bf4434b07adbcd8b3711b697733f901574aa: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:04:26.938548 systemd[1]: Started cri-containerd-938089b607624eb3dcdbd9f787c1a0781fbe0d25e42c3d28a45eaaa08928bbc4.scope - libcontainer container 938089b607624eb3dcdbd9f787c1a0781fbe0d25e42c3d28a45eaaa08928bbc4. 
Sep 5 06:04:26.943575 containerd[1527]: time="2025-09-05T06:04:26.943533100Z" level=info msg="CreateContainer within sandbox \"fd5f84fe32fd6266ba52546f77628225e47dc24abc7c1ba2ae0e012dc5ef70fc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5cb1309b6adf9b878e2c3d3221b4bf4434b07adbcd8b3711b697733f901574aa\"" Sep 5 06:04:26.944065 containerd[1527]: time="2025-09-05T06:04:26.944021620Z" level=info msg="StartContainer for \"5cb1309b6adf9b878e2c3d3221b4bf4434b07adbcd8b3711b697733f901574aa\"" Sep 5 06:04:26.945081 containerd[1527]: time="2025-09-05T06:04:26.944315740Z" level=info msg="CreateContainer within sandbox \"ac94024d6fe714423cf033353700b13531bcfcf36e2ec860ccfcbf2123654dde\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b4e219bb090a7a3c3598af37758cd2642961276481cb3bfd175bfd5bc3a159b1\"" Sep 5 06:04:26.945333 containerd[1527]: time="2025-09-05T06:04:26.945297180Z" level=info msg="StartContainer for \"b4e219bb090a7a3c3598af37758cd2642961276481cb3bfd175bfd5bc3a159b1\"" Sep 5 06:04:26.945643 containerd[1527]: time="2025-09-05T06:04:26.945608540Z" level=info msg="connecting to shim 5cb1309b6adf9b878e2c3d3221b4bf4434b07adbcd8b3711b697733f901574aa" address="unix:///run/containerd/s/eb6bb94e29c05dfe0963c234f6cfd83baf4137564fc20b6f98378fb22c10c55d" protocol=ttrpc version=3 Sep 5 06:04:26.946440 containerd[1527]: time="2025-09-05T06:04:26.946412580Z" level=info msg="connecting to shim b4e219bb090a7a3c3598af37758cd2642961276481cb3bfd175bfd5bc3a159b1" address="unix:///run/containerd/s/aec1507e7e3faf99e3bcb41f0420096f3c433217397cab61a7184d3a7f6d6dd5" protocol=ttrpc version=3 Sep 5 06:04:26.958929 kubelet[2312]: E0905 06:04:26.958895 2312 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.144:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.144:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 5 06:04:26.966540 systemd[1]: Started cri-containerd-5cb1309b6adf9b878e2c3d3221b4bf4434b07adbcd8b3711b697733f901574aa.scope - libcontainer container 5cb1309b6adf9b878e2c3d3221b4bf4434b07adbcd8b3711b697733f901574aa. Sep 5 06:04:26.970534 systemd[1]: Started cri-containerd-b4e219bb090a7a3c3598af37758cd2642961276481cb3bfd175bfd5bc3a159b1.scope - libcontainer container b4e219bb090a7a3c3598af37758cd2642961276481cb3bfd175bfd5bc3a159b1. Sep 5 06:04:26.987706 containerd[1527]: time="2025-09-05T06:04:26.987673380Z" level=info msg="StartContainer for \"938089b607624eb3dcdbd9f787c1a0781fbe0d25e42c3d28a45eaaa08928bbc4\" returns successfully" Sep 5 06:04:27.001756 kubelet[2312]: I0905 06:04:27.001485 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 06:04:27.001846 kubelet[2312]: E0905 06:04:27.001804 2312 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.144:6443/api/v1/nodes\": dial tcp 10.0.0.144:6443: connect: connection refused" node="localhost" Sep 5 06:04:27.018152 containerd[1527]: time="2025-09-05T06:04:27.017973940Z" level=info msg="StartContainer for \"5cb1309b6adf9b878e2c3d3221b4bf4434b07adbcd8b3711b697733f901574aa\" returns successfully" Sep 5 06:04:27.018505 containerd[1527]: time="2025-09-05T06:04:27.018431460Z" level=info msg="StartContainer for \"b4e219bb090a7a3c3598af37758cd2642961276481cb3bfd175bfd5bc3a159b1\" returns successfully" Sep 5 06:04:27.175731 kubelet[2312]: E0905 06:04:27.175702 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:04:27.177759 kubelet[2312]: E0905 06:04:27.177731 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:04:27.182143 kubelet[2312]: E0905 
06:04:27.182124 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:04:27.804743 kubelet[2312]: I0905 06:04:27.804473 2312 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 06:04:28.181572 kubelet[2312]: E0905 06:04:28.181541 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:04:28.182640 kubelet[2312]: E0905 06:04:28.182622 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:04:28.985089 kubelet[2312]: E0905 06:04:28.985051 2312 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 5 06:04:29.042440 kubelet[2312]: E0905 06:04:29.042335 2312 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18624dba0f8bcc0c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 06:04:26.14799054 +0000 UTC m=+0.722794881,LastTimestamp:2025-09-05 06:04:26.14799054 +0000 UTC m=+0.722794881,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 06:04:29.095584 kubelet[2312]: E0905 06:04:29.095490 2312 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18624dba101a4784 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-05 06:04:26.15732826 +0000 UTC m=+0.732132641,LastTimestamp:2025-09-05 06:04:26.15732826 +0000 UTC m=+0.732132641,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 5 06:04:29.122125 kubelet[2312]: I0905 06:04:29.122089 2312 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 06:04:29.122184 kubelet[2312]: E0905 06:04:29.122128 2312 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 5 06:04:29.130619 kubelet[2312]: E0905 06:04:29.130591 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:04:29.183346 kubelet[2312]: E0905 06:04:29.183318 2312 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 5 06:04:29.231477 kubelet[2312]: E0905 06:04:29.231438 2312 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:04:29.354201 kubelet[2312]: I0905 06:04:29.354038 2312 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 06:04:29.358601 kubelet[2312]: E0905 06:04:29.358574 2312 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 5 06:04:29.358601 kubelet[2312]: I0905 06:04:29.358598 2312 kubelet.go:3309] "Creating a mirror 
pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 06:04:29.360171 kubelet[2312]: E0905 06:04:29.360138 2312 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 5 06:04:29.360171 kubelet[2312]: I0905 06:04:29.360162 2312 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 06:04:29.361582 kubelet[2312]: E0905 06:04:29.361552 2312 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 5 06:04:30.143259 kubelet[2312]: I0905 06:04:30.143196 2312 apiserver.go:52] "Watching apiserver" Sep 5 06:04:30.156549 kubelet[2312]: I0905 06:04:30.156509 2312 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 06:04:30.859737 systemd[1]: Reload requested from client PID 2598 ('systemctl') (unit session-7.scope)... Sep 5 06:04:30.859751 systemd[1]: Reloading... Sep 5 06:04:30.934415 zram_generator::config[2644]: No configuration found. Sep 5 06:04:31.097830 systemd[1]: Reloading finished in 237 ms. Sep 5 06:04:31.125634 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 06:04:31.138078 systemd[1]: kubelet.service: Deactivated successfully. Sep 5 06:04:31.138322 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 5 06:04:31.138396 systemd[1]: kubelet.service: Consumed 1.077s CPU time, 127.6M memory peak. Sep 5 06:04:31.140113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 5 06:04:31.266705 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 5 06:04:31.270519 (kubelet)[2683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 5 06:04:31.305878 kubelet[2683]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 06:04:31.305878 kubelet[2683]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 5 06:04:31.305878 kubelet[2683]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 5 06:04:31.306215 kubelet[2683]: I0905 06:04:31.305914 2683 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 5 06:04:31.312476 kubelet[2683]: I0905 06:04:31.312437 2683 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 5 06:04:31.312476 kubelet[2683]: I0905 06:04:31.312468 2683 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 5 06:04:31.312674 kubelet[2683]: I0905 06:04:31.312658 2683 server.go:956] "Client rotation is on, will bootstrap in background" Sep 5 06:04:31.313945 kubelet[2683]: I0905 06:04:31.313925 2683 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 5 06:04:31.316558 kubelet[2683]: I0905 06:04:31.316328 2683 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 5 06:04:31.320463 kubelet[2683]: I0905 06:04:31.320444 2683 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" Sep 5 06:04:31.324653 kubelet[2683]: I0905 06:04:31.323769 2683 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 5 06:04:31.324653 kubelet[2683]: I0905 06:04:31.323962 2683 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 5 06:04:31.324653 kubelet[2683]: I0905 06:04:31.323981 2683 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion
":2} Sep 5 06:04:31.324653 kubelet[2683]: I0905 06:04:31.324198 2683 topology_manager.go:138] "Creating topology manager with none policy" Sep 5 06:04:31.325127 kubelet[2683]: I0905 06:04:31.324211 2683 container_manager_linux.go:303] "Creating device plugin manager" Sep 5 06:04:31.325127 kubelet[2683]: I0905 06:04:31.324264 2683 state_mem.go:36] "Initialized new in-memory state store" Sep 5 06:04:31.325127 kubelet[2683]: I0905 06:04:31.324457 2683 kubelet.go:480] "Attempting to sync node with API server" Sep 5 06:04:31.325127 kubelet[2683]: I0905 06:04:31.324474 2683 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 5 06:04:31.325127 kubelet[2683]: I0905 06:04:31.324502 2683 kubelet.go:386] "Adding apiserver pod source" Sep 5 06:04:31.325127 kubelet[2683]: I0905 06:04:31.324595 2683 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 5 06:04:31.326053 kubelet[2683]: I0905 06:04:31.325992 2683 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 5 06:04:31.326619 kubelet[2683]: I0905 06:04:31.326577 2683 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 5 06:04:31.328633 kubelet[2683]: I0905 06:04:31.328610 2683 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 5 06:04:31.328692 kubelet[2683]: I0905 06:04:31.328650 2683 server.go:1289] "Started kubelet" Sep 5 06:04:31.329111 kubelet[2683]: I0905 06:04:31.329079 2683 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 5 06:04:31.330646 kubelet[2683]: I0905 06:04:31.330588 2683 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 5 06:04:31.330886 kubelet[2683]: I0905 06:04:31.330863 2683 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 5 
06:04:31.332242 kubelet[2683]: I0905 06:04:31.332205 2683 server.go:317] "Adding debug handlers to kubelet server" Sep 5 06:04:31.333536 kubelet[2683]: I0905 06:04:31.333497 2683 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 5 06:04:31.334553 kubelet[2683]: I0905 06:04:31.334519 2683 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 5 06:04:31.335693 kubelet[2683]: I0905 06:04:31.335659 2683 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 5 06:04:31.335883 kubelet[2683]: E0905 06:04:31.335861 2683 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 5 06:04:31.336809 kubelet[2683]: I0905 06:04:31.336779 2683 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 5 06:04:31.336974 kubelet[2683]: I0905 06:04:31.336946 2683 reconciler.go:26] "Reconciler: start to sync state" Sep 5 06:04:31.338388 kubelet[2683]: I0905 06:04:31.337490 2683 factory.go:223] Registration of the systemd container factory successfully Sep 5 06:04:31.338388 kubelet[2683]: I0905 06:04:31.337588 2683 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 5 06:04:31.343577 kubelet[2683]: I0905 06:04:31.343538 2683 factory.go:223] Registration of the containerd container factory successfully Sep 5 06:04:31.366762 kubelet[2683]: I0905 06:04:31.366509 2683 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 5 06:04:31.368633 kubelet[2683]: I0905 06:04:31.368605 2683 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 5 06:04:31.369853 kubelet[2683]: I0905 06:04:31.369835 2683 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 5 06:04:31.369933 kubelet[2683]: I0905 06:04:31.369859 2683 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 5 06:04:31.369933 kubelet[2683]: I0905 06:04:31.369867 2683 kubelet.go:2436] "Starting kubelet main sync loop" Sep 5 06:04:31.370481 kubelet[2683]: E0905 06:04:31.370456 2683 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 5 06:04:31.386531 kubelet[2683]: I0905 06:04:31.385525 2683 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 5 06:04:31.386531 kubelet[2683]: I0905 06:04:31.385546 2683 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 5 06:04:31.386531 kubelet[2683]: I0905 06:04:31.385580 2683 state_mem.go:36] "Initialized new in-memory state store" Sep 5 06:04:31.386531 kubelet[2683]: I0905 06:04:31.385714 2683 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 5 06:04:31.386531 kubelet[2683]: I0905 06:04:31.385724 2683 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 5 06:04:31.386531 kubelet[2683]: I0905 06:04:31.385740 2683 policy_none.go:49] "None policy: Start" Sep 5 06:04:31.386531 kubelet[2683]: I0905 06:04:31.385750 2683 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 5 06:04:31.386531 kubelet[2683]: I0905 06:04:31.385759 2683 state_mem.go:35] "Initializing new in-memory state store" Sep 5 06:04:31.386531 kubelet[2683]: I0905 06:04:31.385838 2683 state_mem.go:75] "Updated machine memory state" Sep 5 06:04:31.389724 kubelet[2683]: E0905 06:04:31.389700 2683 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 5 06:04:31.389914 kubelet[2683]: I0905 06:04:31.389865 
2683 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 5 06:04:31.389914 kubelet[2683]: I0905 06:04:31.389880 2683 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 5 06:04:31.390069 kubelet[2683]: I0905 06:04:31.390049 2683 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 5 06:04:31.391409 kubelet[2683]: E0905 06:04:31.391129 2683 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 5 06:04:31.471565 kubelet[2683]: I0905 06:04:31.471532 2683 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 06:04:31.471780 kubelet[2683]: I0905 06:04:31.471731 2683 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 06:04:31.472106 kubelet[2683]: I0905 06:04:31.471636 2683 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 06:04:31.492451 kubelet[2683]: I0905 06:04:31.492426 2683 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 5 06:04:31.499297 kubelet[2683]: I0905 06:04:31.499256 2683 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 5 06:04:31.499378 kubelet[2683]: I0905 06:04:31.499336 2683 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 5 06:04:31.538179 kubelet[2683]: I0905 06:04:31.538131 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:04:31.538179 kubelet[2683]: I0905 06:04:31.538168 2683 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:04:31.538312 kubelet[2683]: I0905 06:04:31.538191 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:04:31.538312 kubelet[2683]: I0905 06:04:31.538214 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:04:31.538312 kubelet[2683]: I0905 06:04:31.538238 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 5 06:04:31.538312 kubelet[2683]: I0905 06:04:31.538267 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c31566f5e156ce257f550d131f53649e-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c31566f5e156ce257f550d131f53649e\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:04:31.538312 kubelet[2683]: I0905 06:04:31.538283 2683 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c31566f5e156ce257f550d131f53649e-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c31566f5e156ce257f550d131f53649e\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:04:31.538434 kubelet[2683]: I0905 06:04:31.538298 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c31566f5e156ce257f550d131f53649e-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c31566f5e156ce257f550d131f53649e\") " pod="kube-system/kube-apiserver-localhost" Sep 5 06:04:31.538434 kubelet[2683]: I0905 06:04:31.538314 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 5 06:04:32.325927 kubelet[2683]: I0905 06:04:32.325893 2683 apiserver.go:52] "Watching apiserver" Sep 5 06:04:32.337405 kubelet[2683]: I0905 06:04:32.336884 2683 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 5 06:04:32.359834 kubelet[2683]: I0905 06:04:32.359776 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.3597601799999999 podStartE2EDuration="1.35976018s" podCreationTimestamp="2025-09-05 06:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:04:32.35967274 +0000 UTC m=+1.086058401" watchObservedRunningTime="2025-09-05 06:04:32.35976018 +0000 UTC m=+1.086145841" Sep 5 06:04:32.359970 kubelet[2683]: I0905 06:04:32.359893 2683 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.35988778 podStartE2EDuration="1.35988778s" podCreationTimestamp="2025-09-05 06:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:04:32.35319834 +0000 UTC m=+1.079584001" watchObservedRunningTime="2025-09-05 06:04:32.35988778 +0000 UTC m=+1.086273441" Sep 5 06:04:32.368796 kubelet[2683]: I0905 06:04:32.368745 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.3687317399999999 podStartE2EDuration="1.36873174s" podCreationTimestamp="2025-09-05 06:04:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:04:32.36857346 +0000 UTC m=+1.094959121" watchObservedRunningTime="2025-09-05 06:04:32.36873174 +0000 UTC m=+1.095117441" Sep 5 06:04:32.381869 kubelet[2683]: I0905 06:04:32.381843 2683 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 5 06:04:32.381980 kubelet[2683]: I0905 06:04:32.381916 2683 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 5 06:04:32.382103 kubelet[2683]: I0905 06:04:32.382067 2683 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 5 06:04:32.387088 kubelet[2683]: E0905 06:04:32.387059 2683 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 5 06:04:32.389668 kubelet[2683]: E0905 06:04:32.389635 2683 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" 
Sep 5 06:04:32.390813 kubelet[2683]: E0905 06:04:32.390787 2683 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 5 06:04:37.027930 kubelet[2683]: I0905 06:04:37.027893 2683 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 5 06:04:37.030041 containerd[1527]: time="2025-09-05T06:04:37.030002575Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 5 06:04:37.030320 kubelet[2683]: I0905 06:04:37.030202 2683 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 5 06:04:37.842322 systemd[1]: Created slice kubepods-besteffort-pod5adb5536_34cb_4e56_969c_b6939475e000.slice - libcontainer container kubepods-besteffort-pod5adb5536_34cb_4e56_969c_b6939475e000.slice. Sep 5 06:04:37.879856 kubelet[2683]: I0905 06:04:37.879791 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5adb5536-34cb-4e56-969c-b6939475e000-kube-proxy\") pod \"kube-proxy-rs2w2\" (UID: \"5adb5536-34cb-4e56-969c-b6939475e000\") " pod="kube-system/kube-proxy-rs2w2" Sep 5 06:04:37.879975 kubelet[2683]: I0905 06:04:37.879869 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5adb5536-34cb-4e56-969c-b6939475e000-xtables-lock\") pod \"kube-proxy-rs2w2\" (UID: \"5adb5536-34cb-4e56-969c-b6939475e000\") " pod="kube-system/kube-proxy-rs2w2" Sep 5 06:04:37.879975 kubelet[2683]: I0905 06:04:37.879887 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5adb5536-34cb-4e56-969c-b6939475e000-lib-modules\") pod \"kube-proxy-rs2w2\" (UID: 
\"5adb5536-34cb-4e56-969c-b6939475e000\") " pod="kube-system/kube-proxy-rs2w2" Sep 5 06:04:37.879975 kubelet[2683]: I0905 06:04:37.879903 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fndgs\" (UniqueName: \"kubernetes.io/projected/5adb5536-34cb-4e56-969c-b6939475e000-kube-api-access-fndgs\") pod \"kube-proxy-rs2w2\" (UID: \"5adb5536-34cb-4e56-969c-b6939475e000\") " pod="kube-system/kube-proxy-rs2w2" Sep 5 06:04:37.988683 kubelet[2683]: E0905 06:04:37.988619 2683 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Sep 5 06:04:37.988683 kubelet[2683]: E0905 06:04:37.988651 2683 projected.go:194] Error preparing data for projected volume kube-api-access-fndgs for pod kube-system/kube-proxy-rs2w2: configmap "kube-root-ca.crt" not found Sep 5 06:04:37.988820 kubelet[2683]: E0905 06:04:37.988717 2683 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5adb5536-34cb-4e56-969c-b6939475e000-kube-api-access-fndgs podName:5adb5536-34cb-4e56-969c-b6939475e000 nodeName:}" failed. No retries permitted until 2025-09-05 06:04:38.488698641 +0000 UTC m=+7.215084302 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-fndgs" (UniqueName: "kubernetes.io/projected/5adb5536-34cb-4e56-969c-b6939475e000-kube-api-access-fndgs") pod "kube-proxy-rs2w2" (UID: "5adb5536-34cb-4e56-969c-b6939475e000") : configmap "kube-root-ca.crt" not found Sep 5 06:04:38.258955 systemd[1]: Created slice kubepods-besteffort-pod7f0f69ad_e5cd_4e69_b8df_e0efb2a3b6f4.slice - libcontainer container kubepods-besteffort-pod7f0f69ad_e5cd_4e69_b8df_e0efb2a3b6f4.slice. 
Sep 5 06:04:38.283675 kubelet[2683]: I0905 06:04:38.283646 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7f0f69ad-e5cd-4e69-b8df-e0efb2a3b6f4-var-lib-calico\") pod \"tigera-operator-755d956888-pb46r\" (UID: \"7f0f69ad-e5cd-4e69-b8df-e0efb2a3b6f4\") " pod="tigera-operator/tigera-operator-755d956888-pb46r" Sep 5 06:04:38.284088 kubelet[2683]: I0905 06:04:38.284048 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp2f9\" (UniqueName: \"kubernetes.io/projected/7f0f69ad-e5cd-4e69-b8df-e0efb2a3b6f4-kube-api-access-zp2f9\") pod \"tigera-operator-755d956888-pb46r\" (UID: \"7f0f69ad-e5cd-4e69-b8df-e0efb2a3b6f4\") " pod="tigera-operator/tigera-operator-755d956888-pb46r" Sep 5 06:04:38.562895 containerd[1527]: time="2025-09-05T06:04:38.562755557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-pb46r,Uid:7f0f69ad-e5cd-4e69-b8df-e0efb2a3b6f4,Namespace:tigera-operator,Attempt:0,}" Sep 5 06:04:38.580920 containerd[1527]: time="2025-09-05T06:04:38.580844678Z" level=info msg="connecting to shim c5a772359e5746b6db7a7e73475b2798fa990cd911a77b727c76cae8b2cbacc7" address="unix:///run/containerd/s/c7fbe565919cb14eb9f9c9f0dc9ad8b7ebdcd20af6623b140e35a8f5b7714395" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:04:38.604538 systemd[1]: Started cri-containerd-c5a772359e5746b6db7a7e73475b2798fa990cd911a77b727c76cae8b2cbacc7.scope - libcontainer container c5a772359e5746b6db7a7e73475b2798fa990cd911a77b727c76cae8b2cbacc7. 
Sep 5 06:04:38.630402 containerd[1527]: time="2025-09-05T06:04:38.630284377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-pb46r,Uid:7f0f69ad-e5cd-4e69-b8df-e0efb2a3b6f4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c5a772359e5746b6db7a7e73475b2798fa990cd911a77b727c76cae8b2cbacc7\"" Sep 5 06:04:38.631884 containerd[1527]: time="2025-09-05T06:04:38.631859184Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 5 06:04:38.757690 containerd[1527]: time="2025-09-05T06:04:38.757639184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rs2w2,Uid:5adb5536-34cb-4e56-969c-b6939475e000,Namespace:kube-system,Attempt:0,}" Sep 5 06:04:38.776077 containerd[1527]: time="2025-09-05T06:04:38.775933505Z" level=info msg="connecting to shim a77a5ce01c375d0974eac52e8d5d1a9bad79d3b5c01b65d53237a4d73d9c2ce3" address="unix:///run/containerd/s/388ab227f8a82a7aeb10638da245141d5c702d7051a23a0a12f232e893ce56c9" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:04:38.800551 systemd[1]: Started cri-containerd-a77a5ce01c375d0974eac52e8d5d1a9bad79d3b5c01b65d53237a4d73d9c2ce3.scope - libcontainer container a77a5ce01c375d0974eac52e8d5d1a9bad79d3b5c01b65d53237a4d73d9c2ce3. 
Sep 5 06:04:38.825885 containerd[1527]: time="2025-09-05T06:04:38.825606806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rs2w2,Uid:5adb5536-34cb-4e56-969c-b6939475e000,Namespace:kube-system,Attempt:0,} returns sandbox id \"a77a5ce01c375d0974eac52e8d5d1a9bad79d3b5c01b65d53237a4d73d9c2ce3\"" Sep 5 06:04:38.829913 containerd[1527]: time="2025-09-05T06:04:38.829883865Z" level=info msg="CreateContainer within sandbox \"a77a5ce01c375d0974eac52e8d5d1a9bad79d3b5c01b65d53237a4d73d9c2ce3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 5 06:04:38.838032 containerd[1527]: time="2025-09-05T06:04:38.837983821Z" level=info msg="Container ddd074efc0deee6a828965d502914dd83c1bbd9526705644c4ac677f08e6b2a2: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:04:38.844257 containerd[1527]: time="2025-09-05T06:04:38.844202128Z" level=info msg="CreateContainer within sandbox \"a77a5ce01c375d0974eac52e8d5d1a9bad79d3b5c01b65d53237a4d73d9c2ce3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ddd074efc0deee6a828965d502914dd83c1bbd9526705644c4ac677f08e6b2a2\"" Sep 5 06:04:38.844742 containerd[1527]: time="2025-09-05T06:04:38.844712811Z" level=info msg="StartContainer for \"ddd074efc0deee6a828965d502914dd83c1bbd9526705644c4ac677f08e6b2a2\"" Sep 5 06:04:38.846582 containerd[1527]: time="2025-09-05T06:04:38.846548939Z" level=info msg="connecting to shim ddd074efc0deee6a828965d502914dd83c1bbd9526705644c4ac677f08e6b2a2" address="unix:///run/containerd/s/388ab227f8a82a7aeb10638da245141d5c702d7051a23a0a12f232e893ce56c9" protocol=ttrpc version=3 Sep 5 06:04:38.866537 systemd[1]: Started cri-containerd-ddd074efc0deee6a828965d502914dd83c1bbd9526705644c4ac677f08e6b2a2.scope - libcontainer container ddd074efc0deee6a828965d502914dd83c1bbd9526705644c4ac677f08e6b2a2. 
Sep 5 06:04:38.898048 containerd[1527]: time="2025-09-05T06:04:38.898012008Z" level=info msg="StartContainer for \"ddd074efc0deee6a828965d502914dd83c1bbd9526705644c4ac677f08e6b2a2\" returns successfully" Sep 5 06:04:39.403311 kubelet[2683]: I0905 06:04:39.403200 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rs2w2" podStartSLOduration=2.403184262 podStartE2EDuration="2.403184262s" podCreationTimestamp="2025-09-05 06:04:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:04:39.403086262 +0000 UTC m=+8.129471963" watchObservedRunningTime="2025-09-05 06:04:39.403184262 +0000 UTC m=+8.129569923" Sep 5 06:04:39.645068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount202918268.mount: Deactivated successfully. Sep 5 06:04:40.056715 containerd[1527]: time="2025-09-05T06:04:40.056664211Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:40.057210 containerd[1527]: time="2025-09-05T06:04:40.057184533Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=22152365" Sep 5 06:04:40.057899 containerd[1527]: time="2025-09-05T06:04:40.057870416Z" level=info msg="ImageCreate event name:\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:40.060160 containerd[1527]: time="2025-09-05T06:04:40.060129785Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:40.060786 containerd[1527]: time="2025-09-05T06:04:40.060753987Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id 
\"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"22148360\" in 1.428863523s" Sep 5 06:04:40.060786 containerd[1527]: time="2025-09-05T06:04:40.060783987Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:dd2e197838b00861b08ae5f480dfbfb9a519722e35ced99346315722309cbe9f\"" Sep 5 06:04:40.064814 containerd[1527]: time="2025-09-05T06:04:40.064735403Z" level=info msg="CreateContainer within sandbox \"c5a772359e5746b6db7a7e73475b2798fa990cd911a77b727c76cae8b2cbacc7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 5 06:04:40.071334 containerd[1527]: time="2025-09-05T06:04:40.070830187Z" level=info msg="Container 3aa4024e245a267d5d05fb13e3f7e4e57f3039396ee08f38f154139b89ae2638: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:04:40.074574 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2677186632.mount: Deactivated successfully. 
Sep 5 06:04:40.076491 containerd[1527]: time="2025-09-05T06:04:40.076461969Z" level=info msg="CreateContainer within sandbox \"c5a772359e5746b6db7a7e73475b2798fa990cd911a77b727c76cae8b2cbacc7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3aa4024e245a267d5d05fb13e3f7e4e57f3039396ee08f38f154139b89ae2638\"" Sep 5 06:04:40.077026 containerd[1527]: time="2025-09-05T06:04:40.076988571Z" level=info msg="StartContainer for \"3aa4024e245a267d5d05fb13e3f7e4e57f3039396ee08f38f154139b89ae2638\"" Sep 5 06:04:40.077827 containerd[1527]: time="2025-09-05T06:04:40.077724254Z" level=info msg="connecting to shim 3aa4024e245a267d5d05fb13e3f7e4e57f3039396ee08f38f154139b89ae2638" address="unix:///run/containerd/s/c7fbe565919cb14eb9f9c9f0dc9ad8b7ebdcd20af6623b140e35a8f5b7714395" protocol=ttrpc version=3 Sep 5 06:04:40.096530 systemd[1]: Started cri-containerd-3aa4024e245a267d5d05fb13e3f7e4e57f3039396ee08f38f154139b89ae2638.scope - libcontainer container 3aa4024e245a267d5d05fb13e3f7e4e57f3039396ee08f38f154139b89ae2638. 
Sep 5 06:04:40.138539 containerd[1527]: time="2025-09-05T06:04:40.138488251Z" level=info msg="StartContainer for \"3aa4024e245a267d5d05fb13e3f7e4e57f3039396ee08f38f154139b89ae2638\" returns successfully" Sep 5 06:04:40.878895 kubelet[2683]: I0905 06:04:40.878828 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-pb46r" podStartSLOduration=1.448758937 podStartE2EDuration="2.878808664s" podCreationTimestamp="2025-09-05 06:04:38 +0000 UTC" firstStartedPulling="2025-09-05 06:04:38.631480343 +0000 UTC m=+7.357866004" lastFinishedPulling="2025-09-05 06:04:40.06153007 +0000 UTC m=+8.787915731" observedRunningTime="2025-09-05 06:04:40.407666063 +0000 UTC m=+9.134051724" watchObservedRunningTime="2025-09-05 06:04:40.878808664 +0000 UTC m=+9.605194325" Sep 5 06:04:42.079908 systemd[1]: cri-containerd-3aa4024e245a267d5d05fb13e3f7e4e57f3039396ee08f38f154139b89ae2638.scope: Deactivated successfully. Sep 5 06:04:42.110121 containerd[1527]: time="2025-09-05T06:04:42.110070980Z" level=info msg="received exit event container_id:\"3aa4024e245a267d5d05fb13e3f7e4e57f3039396ee08f38f154139b89ae2638\" id:\"3aa4024e245a267d5d05fb13e3f7e4e57f3039396ee08f38f154139b89ae2638\" pid:3020 exit_status:1 exited_at:{seconds:1757052282 nanos:100401266}" Sep 5 06:04:42.110845 containerd[1527]: time="2025-09-05T06:04:42.110265540Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3aa4024e245a267d5d05fb13e3f7e4e57f3039396ee08f38f154139b89ae2638\" id:\"3aa4024e245a267d5d05fb13e3f7e4e57f3039396ee08f38f154139b89ae2638\" pid:3020 exit_status:1 exited_at:{seconds:1757052282 nanos:100401266}" Sep 5 06:04:42.180430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3aa4024e245a267d5d05fb13e3f7e4e57f3039396ee08f38f154139b89ae2638-rootfs.mount: Deactivated successfully. 
Sep 5 06:04:42.421981 kubelet[2683]: I0905 06:04:42.421938 2683 scope.go:117] "RemoveContainer" containerID="3aa4024e245a267d5d05fb13e3f7e4e57f3039396ee08f38f154139b89ae2638" Sep 5 06:04:42.425142 containerd[1527]: time="2025-09-05T06:04:42.425104582Z" level=info msg="CreateContainer within sandbox \"c5a772359e5746b6db7a7e73475b2798fa990cd911a77b727c76cae8b2cbacc7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Sep 5 06:04:42.492537 containerd[1527]: time="2025-09-05T06:04:42.492402173Z" level=info msg="Container 8ec030b76d365ad2224a52796874ad1a4b2d560eb19940ef84fe630783e1d71c: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:04:42.497175 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761741988.mount: Deactivated successfully. Sep 5 06:04:42.502378 containerd[1527]: time="2025-09-05T06:04:42.502256487Z" level=info msg="CreateContainer within sandbox \"c5a772359e5746b6db7a7e73475b2798fa990cd911a77b727c76cae8b2cbacc7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"8ec030b76d365ad2224a52796874ad1a4b2d560eb19940ef84fe630783e1d71c\"" Sep 5 06:04:42.502780 containerd[1527]: time="2025-09-05T06:04:42.502751768Z" level=info msg="StartContainer for \"8ec030b76d365ad2224a52796874ad1a4b2d560eb19940ef84fe630783e1d71c\"" Sep 5 06:04:42.505347 containerd[1527]: time="2025-09-05T06:04:42.505315177Z" level=info msg="connecting to shim 8ec030b76d365ad2224a52796874ad1a4b2d560eb19940ef84fe630783e1d71c" address="unix:///run/containerd/s/c7fbe565919cb14eb9f9c9f0dc9ad8b7ebdcd20af6623b140e35a8f5b7714395" protocol=ttrpc version=3 Sep 5 06:04:42.528533 systemd[1]: Started cri-containerd-8ec030b76d365ad2224a52796874ad1a4b2d560eb19940ef84fe630783e1d71c.scope - libcontainer container 8ec030b76d365ad2224a52796874ad1a4b2d560eb19940ef84fe630783e1d71c. 
Sep 5 06:04:42.571984 containerd[1527]: time="2025-09-05T06:04:42.571942526Z" level=info msg="StartContainer for \"8ec030b76d365ad2224a52796874ad1a4b2d560eb19940ef84fe630783e1d71c\" returns successfully" Sep 5 06:04:45.337599 sudo[1740]: pam_unix(sudo:session): session closed for user root Sep 5 06:04:45.339373 sshd[1739]: Connection closed by 10.0.0.1 port 44038 Sep 5 06:04:45.339827 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Sep 5 06:04:45.343285 systemd[1]: sshd@6-10.0.0.144:22-10.0.0.1:44038.service: Deactivated successfully. Sep 5 06:04:45.345128 systemd[1]: session-7.scope: Deactivated successfully. Sep 5 06:04:45.345361 systemd[1]: session-7.scope: Consumed 7.213s CPU time, 226.2M memory peak. Sep 5 06:04:45.346242 systemd-logind[1502]: Session 7 logged out. Waiting for processes to exit. Sep 5 06:04:45.347674 systemd-logind[1502]: Removed session 7. Sep 5 06:04:46.045491 update_engine[1506]: I20250905 06:04:46.045428 1506 update_attempter.cc:509] Updating boot flags... Sep 5 06:04:51.507073 systemd[1]: Created slice kubepods-besteffort-podca1a83c5_ea78_4625_b5ec_2a08a7d1eb16.slice - libcontainer container kubepods-besteffort-podca1a83c5_ea78_4625_b5ec_2a08a7d1eb16.slice. 
Sep 5 06:04:51.576475 kubelet[2683]: I0905 06:04:51.576431 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ca1a83c5-ea78-4625-b5ec-2a08a7d1eb16-typha-certs\") pod \"calico-typha-7697cbffd-9z9pk\" (UID: \"ca1a83c5-ea78-4625-b5ec-2a08a7d1eb16\") " pod="calico-system/calico-typha-7697cbffd-9z9pk" Sep 5 06:04:51.576926 kubelet[2683]: I0905 06:04:51.576904 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptfm6\" (UniqueName: \"kubernetes.io/projected/ca1a83c5-ea78-4625-b5ec-2a08a7d1eb16-kube-api-access-ptfm6\") pod \"calico-typha-7697cbffd-9z9pk\" (UID: \"ca1a83c5-ea78-4625-b5ec-2a08a7d1eb16\") " pod="calico-system/calico-typha-7697cbffd-9z9pk" Sep 5 06:04:51.577050 kubelet[2683]: I0905 06:04:51.577036 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ca1a83c5-ea78-4625-b5ec-2a08a7d1eb16-tigera-ca-bundle\") pod \"calico-typha-7697cbffd-9z9pk\" (UID: \"ca1a83c5-ea78-4625-b5ec-2a08a7d1eb16\") " pod="calico-system/calico-typha-7697cbffd-9z9pk" Sep 5 06:04:51.712990 systemd[1]: Created slice kubepods-besteffort-podddbb96d7_f756_4928_a7b8_9a198dd3c8ee.slice - libcontainer container kubepods-besteffort-podddbb96d7_f756_4928_a7b8_9a198dd3c8ee.slice. 
Sep 5 06:04:51.778486 kubelet[2683]: I0905 06:04:51.778336 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ddbb96d7-f756-4928-a7b8-9a198dd3c8ee-var-run-calico\") pod \"calico-node-766ss\" (UID: \"ddbb96d7-f756-4928-a7b8-9a198dd3c8ee\") " pod="calico-system/calico-node-766ss" Sep 5 06:04:51.778486 kubelet[2683]: I0905 06:04:51.778412 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ddbb96d7-f756-4928-a7b8-9a198dd3c8ee-cni-net-dir\") pod \"calico-node-766ss\" (UID: \"ddbb96d7-f756-4928-a7b8-9a198dd3c8ee\") " pod="calico-system/calico-node-766ss" Sep 5 06:04:51.778486 kubelet[2683]: I0905 06:04:51.778433 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ddbb96d7-f756-4928-a7b8-9a198dd3c8ee-cni-log-dir\") pod \"calico-node-766ss\" (UID: \"ddbb96d7-f756-4928-a7b8-9a198dd3c8ee\") " pod="calico-system/calico-node-766ss" Sep 5 06:04:51.778486 kubelet[2683]: I0905 06:04:51.778448 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ddbb96d7-f756-4928-a7b8-9a198dd3c8ee-tigera-ca-bundle\") pod \"calico-node-766ss\" (UID: \"ddbb96d7-f756-4928-a7b8-9a198dd3c8ee\") " pod="calico-system/calico-node-766ss" Sep 5 06:04:51.778486 kubelet[2683]: I0905 06:04:51.778466 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ddbb96d7-f756-4928-a7b8-9a198dd3c8ee-flexvol-driver-host\") pod \"calico-node-766ss\" (UID: \"ddbb96d7-f756-4928-a7b8-9a198dd3c8ee\") " pod="calico-system/calico-node-766ss" Sep 5 06:04:51.778677 kubelet[2683]: I0905 06:04:51.778484 2683 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ddbb96d7-f756-4928-a7b8-9a198dd3c8ee-policysync\") pod \"calico-node-766ss\" (UID: \"ddbb96d7-f756-4928-a7b8-9a198dd3c8ee\") " pod="calico-system/calico-node-766ss" Sep 5 06:04:51.778677 kubelet[2683]: I0905 06:04:51.778500 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ddbb96d7-f756-4928-a7b8-9a198dd3c8ee-var-lib-calico\") pod \"calico-node-766ss\" (UID: \"ddbb96d7-f756-4928-a7b8-9a198dd3c8ee\") " pod="calico-system/calico-node-766ss" Sep 5 06:04:51.778677 kubelet[2683]: I0905 06:04:51.778515 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmqph\" (UniqueName: \"kubernetes.io/projected/ddbb96d7-f756-4928-a7b8-9a198dd3c8ee-kube-api-access-fmqph\") pod \"calico-node-766ss\" (UID: \"ddbb96d7-f756-4928-a7b8-9a198dd3c8ee\") " pod="calico-system/calico-node-766ss" Sep 5 06:04:51.778677 kubelet[2683]: I0905 06:04:51.778532 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ddbb96d7-f756-4928-a7b8-9a198dd3c8ee-lib-modules\") pod \"calico-node-766ss\" (UID: \"ddbb96d7-f756-4928-a7b8-9a198dd3c8ee\") " pod="calico-system/calico-node-766ss" Sep 5 06:04:51.778677 kubelet[2683]: I0905 06:04:51.778546 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ddbb96d7-f756-4928-a7b8-9a198dd3c8ee-node-certs\") pod \"calico-node-766ss\" (UID: \"ddbb96d7-f756-4928-a7b8-9a198dd3c8ee\") " pod="calico-system/calico-node-766ss" Sep 5 06:04:51.779603 kubelet[2683]: I0905 06:04:51.778560 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddbb96d7-f756-4928-a7b8-9a198dd3c8ee-xtables-lock\") pod \"calico-node-766ss\" (UID: \"ddbb96d7-f756-4928-a7b8-9a198dd3c8ee\") " pod="calico-system/calico-node-766ss" Sep 5 06:04:51.779603 kubelet[2683]: I0905 06:04:51.778578 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ddbb96d7-f756-4928-a7b8-9a198dd3c8ee-cni-bin-dir\") pod \"calico-node-766ss\" (UID: \"ddbb96d7-f756-4928-a7b8-9a198dd3c8ee\") " pod="calico-system/calico-node-766ss" Sep 5 06:04:51.819957 containerd[1527]: time="2025-09-05T06:04:51.819697789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7697cbffd-9z9pk,Uid:ca1a83c5-ea78-4625-b5ec-2a08a7d1eb16,Namespace:calico-system,Attempt:0,}" Sep 5 06:04:51.849563 containerd[1527]: time="2025-09-05T06:04:51.849481446Z" level=info msg="connecting to shim 19ba19a752245b00d39b57810d84034cec8452fc698d74d2c57d1834c3f4c3f5" address="unix:///run/containerd/s/5584c4c48fd534cc561ad05b834f3f84cd18f2cecd82ba2ad0e1f787f328e725" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:04:51.878578 systemd[1]: Started cri-containerd-19ba19a752245b00d39b57810d84034cec8452fc698d74d2c57d1834c3f4c3f5.scope - libcontainer container 19ba19a752245b00d39b57810d84034cec8452fc698d74d2c57d1834c3f4c3f5. 
Sep 5 06:04:51.883306 kubelet[2683]: E0905 06:04:51.882332 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:51.883306 kubelet[2683]: W0905 06:04:51.882471 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:51.886025 kubelet[2683]: E0905 06:04:51.885603 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:51.888746 kubelet[2683]: E0905 06:04:51.888715 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:51.888746 kubelet[2683]: W0905 06:04:51.888739 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:51.888857 kubelet[2683]: E0905 06:04:51.888759 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:51.892514 kubelet[2683]: E0905 06:04:51.892492 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:51.892514 kubelet[2683]: W0905 06:04:51.892511 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:51.892622 kubelet[2683]: E0905 06:04:51.892527 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:51.918168 containerd[1527]: time="2025-09-05T06:04:51.918127898Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7697cbffd-9z9pk,Uid:ca1a83c5-ea78-4625-b5ec-2a08a7d1eb16,Namespace:calico-system,Attempt:0,} returns sandbox id \"19ba19a752245b00d39b57810d84034cec8452fc698d74d2c57d1834c3f4c3f5\"" Sep 5 06:04:51.919805 containerd[1527]: time="2025-09-05T06:04:51.919776061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 5 06:04:51.968665 kubelet[2683]: E0905 06:04:51.968603 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kl9t2" podUID="4cde16ae-a1c6-4825-89e0-2698e4deec05" Sep 5 06:04:52.016370 containerd[1527]: time="2025-09-05T06:04:52.016327765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-766ss,Uid:ddbb96d7-f756-4928-a7b8-9a198dd3c8ee,Namespace:calico-system,Attempt:0,}" Sep 5 06:04:52.057698 containerd[1527]: time="2025-09-05T06:04:52.057586199Z" level=info msg="connecting to shim c77aa8228a52ed92fddf0520ef0225c3876c8f2287e2108e6d56051be34a1ee4" address="unix:///run/containerd/s/b67bbf7b7df39cdf4c0d9fa2a581ee08afd6a0862a22cc86aceb5178b88876cd" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:04:52.064440 kubelet[2683]: E0905 06:04:52.064322 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.064440 kubelet[2683]: W0905 06:04:52.064425 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.064748 kubelet[2683]: E0905 06:04:52.064449 2683 plugins.go:703] "Error dynamically 
probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.064967 kubelet[2683]: E0905 06:04:52.064907 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.073402 kubelet[2683]: W0905 06:04:52.064925 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.073402 kubelet[2683]: E0905 06:04:52.073150 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.073637 kubelet[2683]: E0905 06:04:52.073479 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.073637 kubelet[2683]: W0905 06:04:52.073518 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.073637 kubelet[2683]: E0905 06:04:52.073533 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.077172 kubelet[2683]: E0905 06:04:52.077163 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.077195 kubelet[2683]: W0905 06:04:52.077172 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.077195 kubelet[2683]: E0905 06:04:52.077180 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.081406 kubelet[2683]: E0905 06:04:52.080501 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.081406 kubelet[2683]: W0905 06:04:52.080518 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.081406 kubelet[2683]: E0905 06:04:52.080531 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.081406 kubelet[2683]: I0905 06:04:52.080559 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4cde16ae-a1c6-4825-89e0-2698e4deec05-socket-dir\") pod \"csi-node-driver-kl9t2\" (UID: \"4cde16ae-a1c6-4825-89e0-2698e4deec05\") " pod="calico-system/csi-node-driver-kl9t2" Sep 5 06:04:52.081406 kubelet[2683]: E0905 06:04:52.080705 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.081406 kubelet[2683]: W0905 06:04:52.080714 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.081406 kubelet[2683]: E0905 06:04:52.080722 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.081406 kubelet[2683]: I0905 06:04:52.080742 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjpbf\" (UniqueName: \"kubernetes.io/projected/4cde16ae-a1c6-4825-89e0-2698e4deec05-kube-api-access-bjpbf\") pod \"csi-node-driver-kl9t2\" (UID: \"4cde16ae-a1c6-4825-89e0-2698e4deec05\") " pod="calico-system/csi-node-driver-kl9t2" Sep 5 06:04:52.081406 kubelet[2683]: E0905 06:04:52.080922 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.081628 kubelet[2683]: W0905 06:04:52.080929 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.081628 kubelet[2683]: E0905 06:04:52.080939 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.081628 kubelet[2683]: I0905 06:04:52.080953 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4cde16ae-a1c6-4825-89e0-2698e4deec05-kubelet-dir\") pod \"csi-node-driver-kl9t2\" (UID: \"4cde16ae-a1c6-4825-89e0-2698e4deec05\") " pod="calico-system/csi-node-driver-kl9t2" Sep 5 06:04:52.081628 kubelet[2683]: E0905 06:04:52.081101 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.081628 kubelet[2683]: W0905 06:04:52.081109 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.081628 kubelet[2683]: E0905 06:04:52.081116 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.081628 kubelet[2683]: I0905 06:04:52.081130 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4cde16ae-a1c6-4825-89e0-2698e4deec05-varrun\") pod \"csi-node-driver-kl9t2\" (UID: \"4cde16ae-a1c6-4825-89e0-2698e4deec05\") " pod="calico-system/csi-node-driver-kl9t2" Sep 5 06:04:52.081628 kubelet[2683]: E0905 06:04:52.081282 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.081765 kubelet[2683]: W0905 06:04:52.081290 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.081765 kubelet[2683]: E0905 06:04:52.081298 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.081765 kubelet[2683]: I0905 06:04:52.081312 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4cde16ae-a1c6-4825-89e0-2698e4deec05-registration-dir\") pod \"csi-node-driver-kl9t2\" (UID: \"4cde16ae-a1c6-4825-89e0-2698e4deec05\") " pod="calico-system/csi-node-driver-kl9t2" Sep 5 06:04:52.081765 kubelet[2683]: E0905 06:04:52.081499 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.081765 kubelet[2683]: W0905 06:04:52.081508 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.081765 kubelet[2683]: E0905 06:04:52.081516 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.081765 kubelet[2683]: E0905 06:04:52.081649 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.081765 kubelet[2683]: W0905 06:04:52.081656 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.081765 kubelet[2683]: E0905 06:04:52.081664 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.083171 kubelet[2683]: E0905 06:04:52.082515 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.083171 kubelet[2683]: W0905 06:04:52.082524 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.083171 kubelet[2683]: E0905 06:04:52.082533 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.083171 kubelet[2683]: E0905 06:04:52.082689 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.083997 kubelet[2683]: W0905 06:04:52.082698 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.083997 kubelet[2683]: E0905 06:04:52.082715 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.083997 kubelet[2683]: E0905 06:04:52.082898 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.083997 kubelet[2683]: W0905 06:04:52.082915 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.083997 kubelet[2683]: E0905 06:04:52.082925 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.083997 kubelet[2683]: E0905 06:04:52.083080 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.083997 kubelet[2683]: W0905 06:04:52.083103 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.083997 kubelet[2683]: E0905 06:04:52.083111 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.119708 systemd[1]: Started cri-containerd-c77aa8228a52ed92fddf0520ef0225c3876c8f2287e2108e6d56051be34a1ee4.scope - libcontainer container c77aa8228a52ed92fddf0520ef0225c3876c8f2287e2108e6d56051be34a1ee4. 
Sep 5 06:04:52.164804 containerd[1527]: time="2025-09-05T06:04:52.164763592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-766ss,Uid:ddbb96d7-f756-4928-a7b8-9a198dd3c8ee,Namespace:calico-system,Attempt:0,} returns sandbox id \"c77aa8228a52ed92fddf0520ef0225c3876c8f2287e2108e6d56051be34a1ee4\"" Sep 5 06:04:52.182363 kubelet[2683]: E0905 06:04:52.182318 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.182363 kubelet[2683]: W0905 06:04:52.182345 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.182363 kubelet[2683]: E0905 06:04:52.182367 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.182783 kubelet[2683]: E0905 06:04:52.182759 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.182783 kubelet[2683]: W0905 06:04:52.182778 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.182850 kubelet[2683]: E0905 06:04:52.182791 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.183052 kubelet[2683]: E0905 06:04:52.183014 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.183052 kubelet[2683]: W0905 06:04:52.183026 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.183052 kubelet[2683]: E0905 06:04:52.183035 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.183219 kubelet[2683]: E0905 06:04:52.183188 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.183219 kubelet[2683]: W0905 06:04:52.183198 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.183219 kubelet[2683]: E0905 06:04:52.183206 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.183594 kubelet[2683]: E0905 06:04:52.183563 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.183594 kubelet[2683]: W0905 06:04:52.183579 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.183594 kubelet[2683]: E0905 06:04:52.183590 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.184278 kubelet[2683]: E0905 06:04:52.184248 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.184483 kubelet[2683]: W0905 06:04:52.184462 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.184517 kubelet[2683]: E0905 06:04:52.184488 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.185143 kubelet[2683]: E0905 06:04:52.185026 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.185143 kubelet[2683]: W0905 06:04:52.185141 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.185225 kubelet[2683]: E0905 06:04:52.185156 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.186540 kubelet[2683]: E0905 06:04:52.186498 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.186540 kubelet[2683]: W0905 06:04:52.186527 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.186540 kubelet[2683]: E0905 06:04:52.186541 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.187327 kubelet[2683]: E0905 06:04:52.187306 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.187327 kubelet[2683]: W0905 06:04:52.187323 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.187433 kubelet[2683]: E0905 06:04:52.187336 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.187656 kubelet[2683]: E0905 06:04:52.187639 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.187656 kubelet[2683]: W0905 06:04:52.187653 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.187701 kubelet[2683]: E0905 06:04:52.187665 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.187922 kubelet[2683]: E0905 06:04:52.187907 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.187922 kubelet[2683]: W0905 06:04:52.187920 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.187971 kubelet[2683]: E0905 06:04:52.187931 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.188127 kubelet[2683]: E0905 06:04:52.188115 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.188171 kubelet[2683]: W0905 06:04:52.188127 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.188171 kubelet[2683]: E0905 06:04:52.188136 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.188433 kubelet[2683]: E0905 06:04:52.188348 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.188514 kubelet[2683]: W0905 06:04:52.188436 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.188514 kubelet[2683]: E0905 06:04:52.188449 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.188860 kubelet[2683]: E0905 06:04:52.188843 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.188882 kubelet[2683]: W0905 06:04:52.188858 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.188882 kubelet[2683]: E0905 06:04:52.188873 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.189149 kubelet[2683]: E0905 06:04:52.189074 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.189174 kubelet[2683]: W0905 06:04:52.189150 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.189174 kubelet[2683]: E0905 06:04:52.189162 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.189479 kubelet[2683]: E0905 06:04:52.189464 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.189503 kubelet[2683]: W0905 06:04:52.189479 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.189503 kubelet[2683]: E0905 06:04:52.189489 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.189712 kubelet[2683]: E0905 06:04:52.189698 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.189738 kubelet[2683]: W0905 06:04:52.189711 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.189738 kubelet[2683]: E0905 06:04:52.189720 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.189880 kubelet[2683]: E0905 06:04:52.189869 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.189906 kubelet[2683]: W0905 06:04:52.189880 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.189906 kubelet[2683]: E0905 06:04:52.189889 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.190183 kubelet[2683]: E0905 06:04:52.190168 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.190217 kubelet[2683]: W0905 06:04:52.190183 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.190609 kubelet[2683]: E0905 06:04:52.190582 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.191875 kubelet[2683]: E0905 06:04:52.191847 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.191875 kubelet[2683]: W0905 06:04:52.191865 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.191875 kubelet[2683]: E0905 06:04:52.191877 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.192085 kubelet[2683]: E0905 06:04:52.192071 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.192085 kubelet[2683]: W0905 06:04:52.192082 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.192130 kubelet[2683]: E0905 06:04:52.192091 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.192272 kubelet[2683]: E0905 06:04:52.192256 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.192272 kubelet[2683]: W0905 06:04:52.192269 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.192314 kubelet[2683]: E0905 06:04:52.192280 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.192464 kubelet[2683]: E0905 06:04:52.192449 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.192493 kubelet[2683]: W0905 06:04:52.192463 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.192493 kubelet[2683]: E0905 06:04:52.192472 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.192661 kubelet[2683]: E0905 06:04:52.192648 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.192682 kubelet[2683]: W0905 06:04:52.192661 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.192682 kubelet[2683]: E0905 06:04:52.192670 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:52.193699 kubelet[2683]: E0905 06:04:52.193062 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.193699 kubelet[2683]: W0905 06:04:52.193077 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.193699 kubelet[2683]: E0905 06:04:52.193087 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.203342 kubelet[2683]: E0905 06:04:52.203316 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:52.203342 kubelet[2683]: W0905 06:04:52.203333 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:52.203450 kubelet[2683]: E0905 06:04:52.203361 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:52.931527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4292762675.mount: Deactivated successfully. 
Sep 5 06:04:53.289619 containerd[1527]: time="2025-09-05T06:04:53.289500745Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:53.290087 containerd[1527]: time="2025-09-05T06:04:53.290038066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33105775" Sep 5 06:04:53.291178 containerd[1527]: time="2025-09-05T06:04:53.291039268Z" level=info msg="ImageCreate event name:\"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:53.292857 containerd[1527]: time="2025-09-05T06:04:53.292812871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:53.293951 containerd[1527]: time="2025-09-05T06:04:53.293722953Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"33105629\" in 1.373913212s" Sep 5 06:04:53.293951 containerd[1527]: time="2025-09-05T06:04:53.293929393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:6a1496fdc48cc0b9ab3c10aef777497484efac5df9efbfbbdf9775e9583645cb\"" Sep 5 06:04:53.294845 containerd[1527]: time="2025-09-05T06:04:53.294746274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 5 06:04:53.301931 containerd[1527]: time="2025-09-05T06:04:53.301884006Z" level=info msg="CreateContainer within sandbox \"19ba19a752245b00d39b57810d84034cec8452fc698d74d2c57d1834c3f4c3f5\" for container 
&ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 5 06:04:53.306913 containerd[1527]: time="2025-09-05T06:04:53.306880535Z" level=info msg="Container 946c4fdda99bb5b9a0964a6069fafc8ec054457f12b533892560c23f3e694190: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:04:53.327512 containerd[1527]: time="2025-09-05T06:04:53.327364369Z" level=info msg="CreateContainer within sandbox \"19ba19a752245b00d39b57810d84034cec8452fc698d74d2c57d1834c3f4c3f5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"946c4fdda99bb5b9a0964a6069fafc8ec054457f12b533892560c23f3e694190\"" Sep 5 06:04:53.332294 containerd[1527]: time="2025-09-05T06:04:53.332264218Z" level=info msg="StartContainer for \"946c4fdda99bb5b9a0964a6069fafc8ec054457f12b533892560c23f3e694190\"" Sep 5 06:04:53.334443 containerd[1527]: time="2025-09-05T06:04:53.334414301Z" level=info msg="connecting to shim 946c4fdda99bb5b9a0964a6069fafc8ec054457f12b533892560c23f3e694190" address="unix:///run/containerd/s/5584c4c48fd534cc561ad05b834f3f84cd18f2cecd82ba2ad0e1f787f328e725" protocol=ttrpc version=3 Sep 5 06:04:53.357544 systemd[1]: Started cri-containerd-946c4fdda99bb5b9a0964a6069fafc8ec054457f12b533892560c23f3e694190.scope - libcontainer container 946c4fdda99bb5b9a0964a6069fafc8ec054457f12b533892560c23f3e694190. 
Sep 5 06:04:53.391800 containerd[1527]: time="2025-09-05T06:04:53.391762718Z" level=info msg="StartContainer for \"946c4fdda99bb5b9a0964a6069fafc8ec054457f12b533892560c23f3e694190\" returns successfully" Sep 5 06:04:53.487738 kubelet[2683]: E0905 06:04:53.487144 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:53.487738 kubelet[2683]: W0905 06:04:53.487728 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:53.488090 kubelet[2683]: E0905 06:04:53.487756 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:53.489054 kubelet[2683]: E0905 06:04:53.488700 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:53.489129 kubelet[2683]: W0905 06:04:53.489058 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:53.489129 kubelet[2683]: E0905 06:04:53.489113 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:53.490149 kubelet[2683]: E0905 06:04:53.490125 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:53.490149 kubelet[2683]: W0905 06:04:53.490142 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:53.490252 kubelet[2683]: E0905 06:04:53.490155 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:53.490674 kubelet[2683]: E0905 06:04:53.490647 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:53.490674 kubelet[2683]: W0905 06:04:53.490662 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:53.490674 kubelet[2683]: E0905 06:04:53.490673 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:53.491686 kubelet[2683]: E0905 06:04:53.491657 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:53.491686 kubelet[2683]: W0905 06:04:53.491672 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:53.491686 kubelet[2683]: E0905 06:04:53.491684 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:53.492409 kubelet[2683]: E0905 06:04:53.492366 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:53.492409 kubelet[2683]: W0905 06:04:53.492393 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:53.492409 kubelet[2683]: E0905 06:04:53.492407 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:53.492972 kubelet[2683]: E0905 06:04:53.492933 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:53.493224 kubelet[2683]: W0905 06:04:53.493182 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:53.493224 kubelet[2683]: E0905 06:04:53.493220 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:53.493576 kubelet[2683]: E0905 06:04:53.493550 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:53.493576 kubelet[2683]: W0905 06:04:53.493565 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:53.493576 kubelet[2683]: E0905 06:04:53.493577 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 5 06:04:53.504491 kubelet[2683]: E0905 06:04:53.504464 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 5 06:04:53.504602 kubelet[2683]: W0905 06:04:53.504590 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 5 06:04:53.504675 kubelet[2683]: E0905 06:04:53.504664 2683 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 5 06:04:54.215405 containerd[1527]: time="2025-09-05T06:04:54.215343366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:54.215773 containerd[1527]: time="2025-09-05T06:04:54.215734007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4266814" Sep 5 06:04:54.216544 containerd[1527]: time="2025-09-05T06:04:54.216515168Z" level=info msg="ImageCreate event name:\"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:54.218726 containerd[1527]: time="2025-09-05T06:04:54.218699052Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:54.219244 containerd[1527]: time="2025-09-05T06:04:54.219199812Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5636015\" in 924.421058ms" Sep 5 06:04:54.219244 containerd[1527]: time="2025-09-05T06:04:54.219241692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:29e6f31ad72882b1b817dd257df6b7981e4d7d31d872b7fe2cf102c6e2af27a5\"" Sep 5 06:04:54.222767 containerd[1527]: time="2025-09-05T06:04:54.222737178Z" level=info msg="CreateContainer within sandbox \"c77aa8228a52ed92fddf0520ef0225c3876c8f2287e2108e6d56051be34a1ee4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 5 06:04:54.230408 containerd[1527]: time="2025-09-05T06:04:54.229433189Z" level=info msg="Container 051c05254f24d025e318912516e5ea558e47668518c3a3f9aa8b302871b2de1b: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:04:54.236963 containerd[1527]: time="2025-09-05T06:04:54.236921800Z" level=info msg="CreateContainer within sandbox \"c77aa8228a52ed92fddf0520ef0225c3876c8f2287e2108e6d56051be34a1ee4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"051c05254f24d025e318912516e5ea558e47668518c3a3f9aa8b302871b2de1b\"" Sep 5 06:04:54.237837 containerd[1527]: time="2025-09-05T06:04:54.237810842Z" level=info msg="StartContainer for \"051c05254f24d025e318912516e5ea558e47668518c3a3f9aa8b302871b2de1b\"" Sep 5 06:04:54.239201 containerd[1527]: time="2025-09-05T06:04:54.239176724Z" level=info msg="connecting to shim 051c05254f24d025e318912516e5ea558e47668518c3a3f9aa8b302871b2de1b" address="unix:///run/containerd/s/b67bbf7b7df39cdf4c0d9fa2a581ee08afd6a0862a22cc86aceb5178b88876cd" protocol=ttrpc version=3 Sep 5 06:04:54.315551 systemd[1]: Started cri-containerd-051c05254f24d025e318912516e5ea558e47668518c3a3f9aa8b302871b2de1b.scope - libcontainer container 051c05254f24d025e318912516e5ea558e47668518c3a3f9aa8b302871b2de1b. 
Sep 5 06:04:54.353558 containerd[1527]: time="2025-09-05T06:04:54.353517185Z" level=info msg="StartContainer for \"051c05254f24d025e318912516e5ea558e47668518c3a3f9aa8b302871b2de1b\" returns successfully" Sep 5 06:04:54.362020 systemd[1]: cri-containerd-051c05254f24d025e318912516e5ea558e47668518c3a3f9aa8b302871b2de1b.scope: Deactivated successfully. Sep 5 06:04:54.367296 containerd[1527]: time="2025-09-05T06:04:54.367156767Z" level=info msg="received exit event container_id:\"051c05254f24d025e318912516e5ea558e47668518c3a3f9aa8b302871b2de1b\" id:\"051c05254f24d025e318912516e5ea558e47668518c3a3f9aa8b302871b2de1b\" pid:3443 exited_at:{seconds:1757052294 nanos:366880326}" Sep 5 06:04:54.367424 containerd[1527]: time="2025-09-05T06:04:54.367297407Z" level=info msg="TaskExit event in podsandbox handler container_id:\"051c05254f24d025e318912516e5ea558e47668518c3a3f9aa8b302871b2de1b\" id:\"051c05254f24d025e318912516e5ea558e47668518c3a3f9aa8b302871b2de1b\" pid:3443 exited_at:{seconds:1757052294 nanos:366880326}" Sep 5 06:04:54.370682 kubelet[2683]: E0905 06:04:54.370639 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kl9t2" podUID="4cde16ae-a1c6-4825-89e0-2698e4deec05" Sep 5 06:04:54.394219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-051c05254f24d025e318912516e5ea558e47668518c3a3f9aa8b302871b2de1b-rootfs.mount: Deactivated successfully. 
Sep 5 06:04:54.429996 kubelet[2683]: I0905 06:04:54.429957 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:04:54.479830 kubelet[2683]: I0905 06:04:54.479545 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7697cbffd-9z9pk" podStartSLOduration=2.104183531 podStartE2EDuration="3.479530185s" podCreationTimestamp="2025-09-05 06:04:51 +0000 UTC" firstStartedPulling="2025-09-05 06:04:51.91925698 +0000 UTC m=+20.645642601" lastFinishedPulling="2025-09-05 06:04:53.294603594 +0000 UTC m=+22.020989255" observedRunningTime="2025-09-05 06:04:53.439680479 +0000 UTC m=+22.166066140" watchObservedRunningTime="2025-09-05 06:04:54.479530185 +0000 UTC m=+23.205915806" Sep 5 06:04:55.435439 containerd[1527]: time="2025-09-05T06:04:55.435367615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 5 06:04:56.371261 kubelet[2683]: E0905 06:04:56.371121 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kl9t2" podUID="4cde16ae-a1c6-4825-89e0-2698e4deec05" Sep 5 06:04:58.259296 containerd[1527]: time="2025-09-05T06:04:58.259251066Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:58.260275 containerd[1527]: time="2025-09-05T06:04:58.260004907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=65913477" Sep 5 06:04:58.261008 containerd[1527]: time="2025-09-05T06:04:58.260975428Z" level=info msg="ImageCreate event name:\"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:58.262878 containerd[1527]: 
time="2025-09-05T06:04:58.262842430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:04:58.263647 containerd[1527]: time="2025-09-05T06:04:58.263622631Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"67282718\" in 2.828205856s" Sep 5 06:04:58.263733 containerd[1527]: time="2025-09-05T06:04:58.263719351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:7077a1dc632ee598cbfa626f9e3c9bca5b20c0d1e1e557995890125b2e8d2e23\"" Sep 5 06:04:58.267747 containerd[1527]: time="2025-09-05T06:04:58.267707596Z" level=info msg="CreateContainer within sandbox \"c77aa8228a52ed92fddf0520ef0225c3876c8f2287e2108e6d56051be34a1ee4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 5 06:04:58.275425 containerd[1527]: time="2025-09-05T06:04:58.274529365Z" level=info msg="Container 9fd076f98660ebd410a8fc9c11780c8678c2ac894545e0ff99c9353c29377ca6: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:04:58.296780 containerd[1527]: time="2025-09-05T06:04:58.296679232Z" level=info msg="CreateContainer within sandbox \"c77aa8228a52ed92fddf0520ef0225c3876c8f2287e2108e6d56051be34a1ee4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9fd076f98660ebd410a8fc9c11780c8678c2ac894545e0ff99c9353c29377ca6\"" Sep 5 06:04:58.297435 containerd[1527]: time="2025-09-05T06:04:58.297237872Z" level=info msg="StartContainer for \"9fd076f98660ebd410a8fc9c11780c8678c2ac894545e0ff99c9353c29377ca6\"" Sep 5 06:04:58.299040 containerd[1527]: time="2025-09-05T06:04:58.299012555Z" 
level=info msg="connecting to shim 9fd076f98660ebd410a8fc9c11780c8678c2ac894545e0ff99c9353c29377ca6" address="unix:///run/containerd/s/b67bbf7b7df39cdf4c0d9fa2a581ee08afd6a0862a22cc86aceb5178b88876cd" protocol=ttrpc version=3 Sep 5 06:04:58.323554 systemd[1]: Started cri-containerd-9fd076f98660ebd410a8fc9c11780c8678c2ac894545e0ff99c9353c29377ca6.scope - libcontainer container 9fd076f98660ebd410a8fc9c11780c8678c2ac894545e0ff99c9353c29377ca6. Sep 5 06:04:58.357163 containerd[1527]: time="2025-09-05T06:04:58.357125706Z" level=info msg="StartContainer for \"9fd076f98660ebd410a8fc9c11780c8678c2ac894545e0ff99c9353c29377ca6\" returns successfully" Sep 5 06:04:58.370980 kubelet[2683]: E0905 06:04:58.370907 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-kl9t2" podUID="4cde16ae-a1c6-4825-89e0-2698e4deec05" Sep 5 06:04:58.997477 systemd[1]: cri-containerd-9fd076f98660ebd410a8fc9c11780c8678c2ac894545e0ff99c9353c29377ca6.scope: Deactivated successfully. 
Sep 5 06:04:58.998419 containerd[1527]: time="2025-09-05T06:04:58.998342730Z" level=info msg="received exit event container_id:\"9fd076f98660ebd410a8fc9c11780c8678c2ac894545e0ff99c9353c29377ca6\" id:\"9fd076f98660ebd410a8fc9c11780c8678c2ac894545e0ff99c9353c29377ca6\" pid:3500 exited_at:{seconds:1757052298 nanos:998100930}" Sep 5 06:04:58.998776 containerd[1527]: time="2025-09-05T06:04:58.998701050Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9fd076f98660ebd410a8fc9c11780c8678c2ac894545e0ff99c9353c29377ca6\" id:\"9fd076f98660ebd410a8fc9c11780c8678c2ac894545e0ff99c9353c29377ca6\" pid:3500 exited_at:{seconds:1757052298 nanos:998100930}" Sep 5 06:04:58.999511 systemd[1]: cri-containerd-9fd076f98660ebd410a8fc9c11780c8678c2ac894545e0ff99c9353c29377ca6.scope: Consumed 442ms CPU time, 177.8M memory peak, 3.1M read from disk, 165.8M written to disk. Sep 5 06:04:59.015348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fd076f98660ebd410a8fc9c11780c8678c2ac894545e0ff99c9353c29377ca6-rootfs.mount: Deactivated successfully. Sep 5 06:04:59.090776 kubelet[2683]: I0905 06:04:59.090745 2683 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 5 06:04:59.130960 systemd[1]: Created slice kubepods-burstable-pod0c507666_50ba_406b_b8d4_d7774556e6e3.slice - libcontainer container kubepods-burstable-pod0c507666_50ba_406b_b8d4_d7774556e6e3.slice. Sep 5 06:04:59.140296 systemd[1]: Created slice kubepods-burstable-pod18fc31d9_056e_48eb_b2ea_0cfe91c332c1.slice - libcontainer container kubepods-burstable-pod18fc31d9_056e_48eb_b2ea_0cfe91c332c1.slice. 
Sep 5 06:04:59.141427 kubelet[2683]: I0905 06:04:59.141133 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18fc31d9-056e-48eb-b2ea-0cfe91c332c1-config-volume\") pod \"coredns-674b8bbfcf-cx2nl\" (UID: \"18fc31d9-056e-48eb-b2ea-0cfe91c332c1\") " pod="kube-system/coredns-674b8bbfcf-cx2nl" Sep 5 06:04:59.141427 kubelet[2683]: I0905 06:04:59.141178 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0c507666-50ba-406b-b8d4-d7774556e6e3-config-volume\") pod \"coredns-674b8bbfcf-fftn2\" (UID: \"0c507666-50ba-406b-b8d4-d7774556e6e3\") " pod="kube-system/coredns-674b8bbfcf-fftn2" Sep 5 06:04:59.141427 kubelet[2683]: I0905 06:04:59.141205 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zc74\" (UniqueName: \"kubernetes.io/projected/0c507666-50ba-406b-b8d4-d7774556e6e3-kube-api-access-2zc74\") pod \"coredns-674b8bbfcf-fftn2\" (UID: \"0c507666-50ba-406b-b8d4-d7774556e6e3\") " pod="kube-system/coredns-674b8bbfcf-fftn2" Sep 5 06:04:59.141427 kubelet[2683]: I0905 06:04:59.141229 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2015faee-c21b-4fed-b369-661fd91c6266-calico-apiserver-certs\") pod \"calico-apiserver-7968c94f79-l6q6t\" (UID: \"2015faee-c21b-4fed-b369-661fd91c6266\") " pod="calico-apiserver/calico-apiserver-7968c94f79-l6q6t" Sep 5 06:04:59.141427 kubelet[2683]: I0905 06:04:59.141245 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmhs7\" (UniqueName: \"kubernetes.io/projected/2015faee-c21b-4fed-b369-661fd91c6266-kube-api-access-gmhs7\") pod \"calico-apiserver-7968c94f79-l6q6t\" (UID: 
\"2015faee-c21b-4fed-b369-661fd91c6266\") " pod="calico-apiserver/calico-apiserver-7968c94f79-l6q6t" Sep 5 06:04:59.141597 kubelet[2683]: I0905 06:04:59.141262 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrnkb\" (UniqueName: \"kubernetes.io/projected/18fc31d9-056e-48eb-b2ea-0cfe91c332c1-kube-api-access-rrnkb\") pod \"coredns-674b8bbfcf-cx2nl\" (UID: \"18fc31d9-056e-48eb-b2ea-0cfe91c332c1\") " pod="kube-system/coredns-674b8bbfcf-cx2nl" Sep 5 06:04:59.146961 systemd[1]: Created slice kubepods-besteffort-pod2015faee_c21b_4fed_b369_661fd91c6266.slice - libcontainer container kubepods-besteffort-pod2015faee_c21b_4fed_b369_661fd91c6266.slice. Sep 5 06:04:59.159268 systemd[1]: Created slice kubepods-besteffort-podc27cfba9_9f26_49f2_945a_fd21b34afb17.slice - libcontainer container kubepods-besteffort-podc27cfba9_9f26_49f2_945a_fd21b34afb17.slice. Sep 5 06:04:59.165255 systemd[1]: Created slice kubepods-besteffort-pode7a18b30_357b_4579_b89d_642cf5908e58.slice - libcontainer container kubepods-besteffort-pode7a18b30_357b_4579_b89d_642cf5908e58.slice. Sep 5 06:04:59.172938 systemd[1]: Created slice kubepods-besteffort-pode4905f2f_9ed5_4229_9af0_720a9e15f973.slice - libcontainer container kubepods-besteffort-pode4905f2f_9ed5_4229_9af0_720a9e15f973.slice. Sep 5 06:04:59.178505 systemd[1]: Created slice kubepods-besteffort-podbcaaad9e_3cf8_479b_bef2_88f1eddc24b8.slice - libcontainer container kubepods-besteffort-podbcaaad9e_3cf8_479b_bef2_88f1eddc24b8.slice. 
Sep 5 06:04:59.242064 kubelet[2683]: I0905 06:04:59.242020 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4905f2f-9ed5-4229-9af0-720a9e15f973-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-bj7r9\" (UID: \"e4905f2f-9ed5-4229-9af0-720a9e15f973\") " pod="calico-system/goldmane-54d579b49d-bj7r9" Sep 5 06:04:59.242201 kubelet[2683]: I0905 06:04:59.242124 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqjc5\" (UniqueName: \"kubernetes.io/projected/c27cfba9-9f26-49f2-945a-fd21b34afb17-kube-api-access-tqjc5\") pod \"calico-apiserver-7968c94f79-7db7j\" (UID: \"c27cfba9-9f26-49f2-945a-fd21b34afb17\") " pod="calico-apiserver/calico-apiserver-7968c94f79-7db7j" Sep 5 06:04:59.242201 kubelet[2683]: I0905 06:04:59.242141 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e4905f2f-9ed5-4229-9af0-720a9e15f973-goldmane-key-pair\") pod \"goldmane-54d579b49d-bj7r9\" (UID: \"e4905f2f-9ed5-4229-9af0-720a9e15f973\") " pod="calico-system/goldmane-54d579b49d-bj7r9" Sep 5 06:04:59.242201 kubelet[2683]: I0905 06:04:59.242157 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bcaaad9e-3cf8-479b-bef2-88f1eddc24b8-tigera-ca-bundle\") pod \"calico-kube-controllers-55995d5b69-tb6ks\" (UID: \"bcaaad9e-3cf8-479b-bef2-88f1eddc24b8\") " pod="calico-system/calico-kube-controllers-55995d5b69-tb6ks" Sep 5 06:04:59.242201 kubelet[2683]: I0905 06:04:59.242179 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c27cfba9-9f26-49f2-945a-fd21b34afb17-calico-apiserver-certs\") pod 
\"calico-apiserver-7968c94f79-7db7j\" (UID: \"c27cfba9-9f26-49f2-945a-fd21b34afb17\") " pod="calico-apiserver/calico-apiserver-7968c94f79-7db7j" Sep 5 06:04:59.242304 kubelet[2683]: I0905 06:04:59.242204 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7a18b30-357b-4579-b89d-642cf5908e58-whisker-ca-bundle\") pod \"whisker-69fc6d58f6-zlpwf\" (UID: \"e7a18b30-357b-4579-b89d-642cf5908e58\") " pod="calico-system/whisker-69fc6d58f6-zlpwf" Sep 5 06:04:59.242304 kubelet[2683]: I0905 06:04:59.242225 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66j4z\" (UniqueName: \"kubernetes.io/projected/e7a18b30-357b-4579-b89d-642cf5908e58-kube-api-access-66j4z\") pod \"whisker-69fc6d58f6-zlpwf\" (UID: \"e7a18b30-357b-4579-b89d-642cf5908e58\") " pod="calico-system/whisker-69fc6d58f6-zlpwf" Sep 5 06:04:59.242304 kubelet[2683]: I0905 06:04:59.242242 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e4905f2f-9ed5-4229-9af0-720a9e15f973-config\") pod \"goldmane-54d579b49d-bj7r9\" (UID: \"e4905f2f-9ed5-4229-9af0-720a9e15f973\") " pod="calico-system/goldmane-54d579b49d-bj7r9" Sep 5 06:04:59.242304 kubelet[2683]: I0905 06:04:59.242272 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e7a18b30-357b-4579-b89d-642cf5908e58-whisker-backend-key-pair\") pod \"whisker-69fc6d58f6-zlpwf\" (UID: \"e7a18b30-357b-4579-b89d-642cf5908e58\") " pod="calico-system/whisker-69fc6d58f6-zlpwf" Sep 5 06:04:59.242304 kubelet[2683]: I0905 06:04:59.242290 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wc2rl\" (UniqueName: 
\"kubernetes.io/projected/e4905f2f-9ed5-4229-9af0-720a9e15f973-kube-api-access-wc2rl\") pod \"goldmane-54d579b49d-bj7r9\" (UID: \"e4905f2f-9ed5-4229-9af0-720a9e15f973\") " pod="calico-system/goldmane-54d579b49d-bj7r9" Sep 5 06:04:59.242442 kubelet[2683]: I0905 06:04:59.242305 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-624dp\" (UniqueName: \"kubernetes.io/projected/bcaaad9e-3cf8-479b-bef2-88f1eddc24b8-kube-api-access-624dp\") pod \"calico-kube-controllers-55995d5b69-tb6ks\" (UID: \"bcaaad9e-3cf8-479b-bef2-88f1eddc24b8\") " pod="calico-system/calico-kube-controllers-55995d5b69-tb6ks" Sep 5 06:04:59.436406 containerd[1527]: time="2025-09-05T06:04:59.436354112Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fftn2,Uid:0c507666-50ba-406b-b8d4-d7774556e6e3,Namespace:kube-system,Attempt:0,}" Sep 5 06:04:59.444062 containerd[1527]: time="2025-09-05T06:04:59.443853841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cx2nl,Uid:18fc31d9-056e-48eb-b2ea-0cfe91c332c1,Namespace:kube-system,Attempt:0,}" Sep 5 06:04:59.451836 containerd[1527]: time="2025-09-05T06:04:59.451801770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 5 06:04:59.459408 containerd[1527]: time="2025-09-05T06:04:59.457052096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968c94f79-l6q6t,Uid:2015faee-c21b-4fed-b369-661fd91c6266,Namespace:calico-apiserver,Attempt:0,}" Sep 5 06:04:59.463658 containerd[1527]: time="2025-09-05T06:04:59.462723303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968c94f79-7db7j,Uid:c27cfba9-9f26-49f2-945a-fd21b34afb17,Namespace:calico-apiserver,Attempt:0,}" Sep 5 06:04:59.470366 containerd[1527]: time="2025-09-05T06:04:59.470326031Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-69fc6d58f6-zlpwf,Uid:e7a18b30-357b-4579-b89d-642cf5908e58,Namespace:calico-system,Attempt:0,}" Sep 5 06:04:59.479013 containerd[1527]: time="2025-09-05T06:04:59.478973881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-bj7r9,Uid:e4905f2f-9ed5-4229-9af0-720a9e15f973,Namespace:calico-system,Attempt:0,}" Sep 5 06:04:59.482723 containerd[1527]: time="2025-09-05T06:04:59.482686885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55995d5b69-tb6ks,Uid:bcaaad9e-3cf8-479b-bef2-88f1eddc24b8,Namespace:calico-system,Attempt:0,}" Sep 5 06:04:59.561703 containerd[1527]: time="2025-09-05T06:04:59.561653256Z" level=error msg="Failed to destroy network for sandbox \"732145419a651340a2069afe93f6d0f5fc456ef6db00454e875c6ffad12bceec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.563337 containerd[1527]: time="2025-09-05T06:04:59.563284138Z" level=error msg="Failed to destroy network for sandbox \"9e7f88949afd7df5aef11bcc4a0531feecf4d1e2c6014288692272e8bd44aeb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.564249 containerd[1527]: time="2025-09-05T06:04:59.564207579Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69fc6d58f6-zlpwf,Uid:e7a18b30-357b-4579-b89d-642cf5908e58,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"732145419a651340a2069afe93f6d0f5fc456ef6db00454e875c6ffad12bceec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 
06:04:59.564684 kubelet[2683]: E0905 06:04:59.564625 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732145419a651340a2069afe93f6d0f5fc456ef6db00454e875c6ffad12bceec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.565424 kubelet[2683]: E0905 06:04:59.564731 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732145419a651340a2069afe93f6d0f5fc456ef6db00454e875c6ffad12bceec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69fc6d58f6-zlpwf" Sep 5 06:04:59.565424 kubelet[2683]: E0905 06:04:59.564752 2683 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732145419a651340a2069afe93f6d0f5fc456ef6db00454e875c6ffad12bceec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69fc6d58f6-zlpwf" Sep 5 06:04:59.565424 kubelet[2683]: E0905 06:04:59.564808 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69fc6d58f6-zlpwf_calico-system(e7a18b30-357b-4579-b89d-642cf5908e58)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69fc6d58f6-zlpwf_calico-system(e7a18b30-357b-4579-b89d-642cf5908e58)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"732145419a651340a2069afe93f6d0f5fc456ef6db00454e875c6ffad12bceec\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69fc6d58f6-zlpwf" podUID="e7a18b30-357b-4579-b89d-642cf5908e58" Sep 5 06:04:59.565693 containerd[1527]: time="2025-09-05T06:04:59.565236100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cx2nl,Uid:18fc31d9-056e-48eb-b2ea-0cfe91c332c1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e7f88949afd7df5aef11bcc4a0531feecf4d1e2c6014288692272e8bd44aeb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.565801 kubelet[2683]: E0905 06:04:59.565520 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e7f88949afd7df5aef11bcc4a0531feecf4d1e2c6014288692272e8bd44aeb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.565801 kubelet[2683]: E0905 06:04:59.565587 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e7f88949afd7df5aef11bcc4a0531feecf4d1e2c6014288692272e8bd44aeb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cx2nl" Sep 5 06:04:59.565801 kubelet[2683]: E0905 06:04:59.565606 2683 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9e7f88949afd7df5aef11bcc4a0531feecf4d1e2c6014288692272e8bd44aeb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-cx2nl" Sep 5 06:04:59.567300 kubelet[2683]: E0905 06:04:59.565663 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-cx2nl_kube-system(18fc31d9-056e-48eb-b2ea-0cfe91c332c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-cx2nl_kube-system(18fc31d9-056e-48eb-b2ea-0cfe91c332c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e7f88949afd7df5aef11bcc4a0531feecf4d1e2c6014288692272e8bd44aeb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-cx2nl" podUID="18fc31d9-056e-48eb-b2ea-0cfe91c332c1" Sep 5 06:04:59.576013 containerd[1527]: time="2025-09-05T06:04:59.575971032Z" level=error msg="Failed to destroy network for sandbox \"634b7bf0d323aa87993f1483e79ac9e5cf10606877e1a2aa539b839e571057d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.576861 containerd[1527]: time="2025-09-05T06:04:59.576825553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55995d5b69-tb6ks,Uid:bcaaad9e-3cf8-479b-bef2-88f1eddc24b8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"634b7bf0d323aa87993f1483e79ac9e5cf10606877e1a2aa539b839e571057d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.577259 kubelet[2683]: E0905 06:04:59.577052 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"634b7bf0d323aa87993f1483e79ac9e5cf10606877e1a2aa539b839e571057d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.577302 kubelet[2683]: E0905 06:04:59.577286 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"634b7bf0d323aa87993f1483e79ac9e5cf10606877e1a2aa539b839e571057d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55995d5b69-tb6ks" Sep 5 06:04:59.577359 kubelet[2683]: E0905 06:04:59.577309 2683 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"634b7bf0d323aa87993f1483e79ac9e5cf10606877e1a2aa539b839e571057d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-55995d5b69-tb6ks" Sep 5 06:04:59.577429 kubelet[2683]: E0905 06:04:59.577357 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-55995d5b69-tb6ks_calico-system(bcaaad9e-3cf8-479b-bef2-88f1eddc24b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-55995d5b69-tb6ks_calico-system(bcaaad9e-3cf8-479b-bef2-88f1eddc24b8)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"634b7bf0d323aa87993f1483e79ac9e5cf10606877e1a2aa539b839e571057d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-55995d5b69-tb6ks" podUID="bcaaad9e-3cf8-479b-bef2-88f1eddc24b8" Sep 5 06:04:59.582216 containerd[1527]: time="2025-09-05T06:04:59.582099159Z" level=error msg="Failed to destroy network for sandbox \"fa368746f5811c421fd0e8d85049b749051755cd242cc17ad3928467f562d821\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.583286 containerd[1527]: time="2025-09-05T06:04:59.583249881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fftn2,Uid:0c507666-50ba-406b-b8d4-d7774556e6e3,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa368746f5811c421fd0e8d85049b749051755cd242cc17ad3928467f562d821\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.583638 kubelet[2683]: E0905 06:04:59.583581 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa368746f5811c421fd0e8d85049b749051755cd242cc17ad3928467f562d821\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.583708 kubelet[2683]: E0905 06:04:59.583654 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fa368746f5811c421fd0e8d85049b749051755cd242cc17ad3928467f562d821\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fftn2" Sep 5 06:04:59.583708 kubelet[2683]: E0905 06:04:59.583674 2683 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa368746f5811c421fd0e8d85049b749051755cd242cc17ad3928467f562d821\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fftn2" Sep 5 06:04:59.583879 kubelet[2683]: E0905 06:04:59.583721 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fftn2_kube-system(0c507666-50ba-406b-b8d4-d7774556e6e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fftn2_kube-system(0c507666-50ba-406b-b8d4-d7774556e6e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa368746f5811c421fd0e8d85049b749051755cd242cc17ad3928467f562d821\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fftn2" podUID="0c507666-50ba-406b-b8d4-d7774556e6e3" Sep 5 06:04:59.592750 containerd[1527]: time="2025-09-05T06:04:59.592696292Z" level=error msg="Failed to destroy network for sandbox \"b8b2052b425574424100cc4bc5b95158f544498dd9d190f650e88c8abd549ba2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.592889 
containerd[1527]: time="2025-09-05T06:04:59.592721772Z" level=error msg="Failed to destroy network for sandbox \"3c8af47d4ecc885cf8987ee4b081671a8b1021a6fd47bf9c8f7b0dd62759a6f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.593748 containerd[1527]: time="2025-09-05T06:04:59.593700973Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968c94f79-l6q6t,Uid:2015faee-c21b-4fed-b369-661fd91c6266,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8b2052b425574424100cc4bc5b95158f544498dd9d190f650e88c8abd549ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.593974 kubelet[2683]: E0905 06:04:59.593936 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8b2052b425574424100cc4bc5b95158f544498dd9d190f650e88c8abd549ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.594028 kubelet[2683]: E0905 06:04:59.593996 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8b2052b425574424100cc4bc5b95158f544498dd9d190f650e88c8abd549ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7968c94f79-l6q6t" Sep 5 06:04:59.594028 kubelet[2683]: E0905 06:04:59.594019 2683 kuberuntime_manager.go:1252] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8b2052b425574424100cc4bc5b95158f544498dd9d190f650e88c8abd549ba2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7968c94f79-l6q6t" Sep 5 06:04:59.594090 kubelet[2683]: E0905 06:04:59.594062 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7968c94f79-l6q6t_calico-apiserver(2015faee-c21b-4fed-b369-661fd91c6266)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7968c94f79-l6q6t_calico-apiserver(2015faee-c21b-4fed-b369-661fd91c6266)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8b2052b425574424100cc4bc5b95158f544498dd9d190f650e88c8abd549ba2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7968c94f79-l6q6t" podUID="2015faee-c21b-4fed-b369-661fd91c6266" Sep 5 06:04:59.594473 containerd[1527]: time="2025-09-05T06:04:59.594437574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968c94f79-7db7j,Uid:c27cfba9-9f26-49f2-945a-fd21b34afb17,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c8af47d4ecc885cf8987ee4b081671a8b1021a6fd47bf9c8f7b0dd62759a6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.594731 containerd[1527]: time="2025-09-05T06:04:59.594682814Z" level=error msg="Failed to destroy network for sandbox 
\"b8677feaf36d4f63890ad32a54e7a098cbd3386db6ed33f12a00d337de92cd63\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.594867 kubelet[2683]: E0905 06:04:59.594839 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c8af47d4ecc885cf8987ee4b081671a8b1021a6fd47bf9c8f7b0dd62759a6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.595275 kubelet[2683]: E0905 06:04:59.595240 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c8af47d4ecc885cf8987ee4b081671a8b1021a6fd47bf9c8f7b0dd62759a6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7968c94f79-7db7j" Sep 5 06:04:59.595344 kubelet[2683]: E0905 06:04:59.595280 2683 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c8af47d4ecc885cf8987ee4b081671a8b1021a6fd47bf9c8f7b0dd62759a6f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7968c94f79-7db7j" Sep 5 06:04:59.595344 kubelet[2683]: E0905 06:04:59.595321 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7968c94f79-7db7j_calico-apiserver(c27cfba9-9f26-49f2-945a-fd21b34afb17)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-apiserver-7968c94f79-7db7j_calico-apiserver(c27cfba9-9f26-49f2-945a-fd21b34afb17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c8af47d4ecc885cf8987ee4b081671a8b1021a6fd47bf9c8f7b0dd62759a6f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7968c94f79-7db7j" podUID="c27cfba9-9f26-49f2-945a-fd21b34afb17" Sep 5 06:04:59.595568 containerd[1527]: time="2025-09-05T06:04:59.595531015Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-bj7r9,Uid:e4905f2f-9ed5-4229-9af0-720a9e15f973,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8677feaf36d4f63890ad32a54e7a098cbd3386db6ed33f12a00d337de92cd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.596001 kubelet[2683]: E0905 06:04:59.595963 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8677feaf36d4f63890ad32a54e7a098cbd3386db6ed33f12a00d337de92cd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:04:59.596041 kubelet[2683]: E0905 06:04:59.596014 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8677feaf36d4f63890ad32a54e7a098cbd3386db6ed33f12a00d337de92cd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/goldmane-54d579b49d-bj7r9" Sep 5 06:04:59.596041 kubelet[2683]: E0905 06:04:59.596032 2683 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b8677feaf36d4f63890ad32a54e7a098cbd3386db6ed33f12a00d337de92cd63\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-bj7r9" Sep 5 06:04:59.596124 kubelet[2683]: E0905 06:04:59.596070 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-bj7r9_calico-system(e4905f2f-9ed5-4229-9af0-720a9e15f973)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-bj7r9_calico-system(e4905f2f-9ed5-4229-9af0-720a9e15f973)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b8677feaf36d4f63890ad32a54e7a098cbd3386db6ed33f12a00d337de92cd63\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-bj7r9" podUID="e4905f2f-9ed5-4229-9af0-720a9e15f973" Sep 5 06:05:00.379069 systemd[1]: Created slice kubepods-besteffort-pod4cde16ae_a1c6_4825_89e0_2698e4deec05.slice - libcontainer container kubepods-besteffort-pod4cde16ae_a1c6_4825_89e0_2698e4deec05.slice. 
Sep 5 06:05:00.381482 containerd[1527]: time="2025-09-05T06:05:00.381444409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kl9t2,Uid:4cde16ae-a1c6-4825-89e0-2698e4deec05,Namespace:calico-system,Attempt:0,}" Sep 5 06:05:00.426359 containerd[1527]: time="2025-09-05T06:05:00.426307617Z" level=error msg="Failed to destroy network for sandbox \"fa2309b7129254945356abdc73e1e3b19f22cd24cc8bb92d3599ad276a6e9400\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:00.427458 containerd[1527]: time="2025-09-05T06:05:00.427404818Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kl9t2,Uid:4cde16ae-a1c6-4825-89e0-2698e4deec05,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa2309b7129254945356abdc73e1e3b19f22cd24cc8bb92d3599ad276a6e9400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:00.427712 kubelet[2683]: E0905 06:05:00.427670 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa2309b7129254945356abdc73e1e3b19f22cd24cc8bb92d3599ad276a6e9400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 5 06:05:00.427768 kubelet[2683]: E0905 06:05:00.427733 2683 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa2309b7129254945356abdc73e1e3b19f22cd24cc8bb92d3599ad276a6e9400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kl9t2" Sep 5 06:05:00.427824 kubelet[2683]: E0905 06:05:00.427753 2683 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa2309b7129254945356abdc73e1e3b19f22cd24cc8bb92d3599ad276a6e9400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-kl9t2" Sep 5 06:05:00.427850 kubelet[2683]: E0905 06:05:00.427820 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-kl9t2_calico-system(4cde16ae-a1c6-4825-89e0-2698e4deec05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-kl9t2_calico-system(4cde16ae-a1c6-4825-89e0-2698e4deec05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa2309b7129254945356abdc73e1e3b19f22cd24cc8bb92d3599ad276a6e9400\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-kl9t2" podUID="4cde16ae-a1c6-4825-89e0-2698e4deec05" Sep 5 06:05:00.428941 systemd[1]: run-netns-cni\x2d103384aa\x2d81f7\x2d07ac\x2d84c6\x2da2edbbe97d62.mount: Deactivated successfully. Sep 5 06:05:03.387159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4146188088.mount: Deactivated successfully. 
Sep 5 06:05:03.616136 containerd[1527]: time="2025-09-05T06:05:03.616083532Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:03.616945 containerd[1527]: time="2025-09-05T06:05:03.616818212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=151100457" Sep 5 06:05:03.617719 containerd[1527]: time="2025-09-05T06:05:03.617682453Z" level=info msg="ImageCreate event name:\"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:03.619777 containerd[1527]: time="2025-09-05T06:05:03.619734215Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:03.620631 containerd[1527]: time="2025-09-05T06:05:03.620242455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"151100319\" in 4.168061125s" Sep 5 06:05:03.620631 containerd[1527]: time="2025-09-05T06:05:03.620273615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:2b8abd2140fc4464ed664d225fe38e5b90bbfcf62996b484b0fc0e0537b6a4a9\"" Sep 5 06:05:03.645291 containerd[1527]: time="2025-09-05T06:05:03.645195237Z" level=info msg="CreateContainer within sandbox \"c77aa8228a52ed92fddf0520ef0225c3876c8f2287e2108e6d56051be34a1ee4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 5 06:05:03.660430 containerd[1527]: time="2025-09-05T06:05:03.659508130Z" level=info msg="Container 
6835738f65081a8bfa44a3755973a20af1ed5c278135981c4ba027e25af29196: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:05:03.668603 containerd[1527]: time="2025-09-05T06:05:03.668546538Z" level=info msg="CreateContainer within sandbox \"c77aa8228a52ed92fddf0520ef0225c3876c8f2287e2108e6d56051be34a1ee4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6835738f65081a8bfa44a3755973a20af1ed5c278135981c4ba027e25af29196\"" Sep 5 06:05:03.669200 containerd[1527]: time="2025-09-05T06:05:03.669161059Z" level=info msg="StartContainer for \"6835738f65081a8bfa44a3755973a20af1ed5c278135981c4ba027e25af29196\"" Sep 5 06:05:03.673480 containerd[1527]: time="2025-09-05T06:05:03.673449782Z" level=info msg="connecting to shim 6835738f65081a8bfa44a3755973a20af1ed5c278135981c4ba027e25af29196" address="unix:///run/containerd/s/b67bbf7b7df39cdf4c0d9fa2a581ee08afd6a0862a22cc86aceb5178b88876cd" protocol=ttrpc version=3 Sep 5 06:05:03.694532 systemd[1]: Started cri-containerd-6835738f65081a8bfa44a3755973a20af1ed5c278135981c4ba027e25af29196.scope - libcontainer container 6835738f65081a8bfa44a3755973a20af1ed5c278135981c4ba027e25af29196. Sep 5 06:05:03.744798 containerd[1527]: time="2025-09-05T06:05:03.744751086Z" level=info msg="StartContainer for \"6835738f65081a8bfa44a3755973a20af1ed5c278135981c4ba027e25af29196\" returns successfully" Sep 5 06:05:03.856070 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 5 06:05:03.856165 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 5 06:05:03.969113 kubelet[2683]: I0905 06:05:03.969072 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e7a18b30-357b-4579-b89d-642cf5908e58-whisker-backend-key-pair\") pod \"e7a18b30-357b-4579-b89d-642cf5908e58\" (UID: \"e7a18b30-357b-4579-b89d-642cf5908e58\") " Sep 5 06:05:03.969113 kubelet[2683]: I0905 06:05:03.969114 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7a18b30-357b-4579-b89d-642cf5908e58-whisker-ca-bundle\") pod \"e7a18b30-357b-4579-b89d-642cf5908e58\" (UID: \"e7a18b30-357b-4579-b89d-642cf5908e58\") " Sep 5 06:05:03.969513 kubelet[2683]: I0905 06:05:03.969132 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66j4z\" (UniqueName: \"kubernetes.io/projected/e7a18b30-357b-4579-b89d-642cf5908e58-kube-api-access-66j4z\") pod \"e7a18b30-357b-4579-b89d-642cf5908e58\" (UID: \"e7a18b30-357b-4579-b89d-642cf5908e58\") " Sep 5 06:05:03.969927 kubelet[2683]: I0905 06:05:03.969875 2683 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7a18b30-357b-4579-b89d-642cf5908e58-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e7a18b30-357b-4579-b89d-642cf5908e58" (UID: "e7a18b30-357b-4579-b89d-642cf5908e58"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 5 06:05:03.974493 kubelet[2683]: I0905 06:05:03.974458 2683 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7a18b30-357b-4579-b89d-642cf5908e58-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e7a18b30-357b-4579-b89d-642cf5908e58" (UID: "e7a18b30-357b-4579-b89d-642cf5908e58"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 5 06:05:03.975679 kubelet[2683]: I0905 06:05:03.975586 2683 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7a18b30-357b-4579-b89d-642cf5908e58-kube-api-access-66j4z" (OuterVolumeSpecName: "kube-api-access-66j4z") pod "e7a18b30-357b-4579-b89d-642cf5908e58" (UID: "e7a18b30-357b-4579-b89d-642cf5908e58"). InnerVolumeSpecName "kube-api-access-66j4z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 5 06:05:04.070355 kubelet[2683]: I0905 06:05:04.069964 2683 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e7a18b30-357b-4579-b89d-642cf5908e58-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 5 06:05:04.070355 kubelet[2683]: I0905 06:05:04.069998 2683 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e7a18b30-357b-4579-b89d-642cf5908e58-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 5 06:05:04.078556 kubelet[2683]: I0905 06:05:04.078522 2683 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-66j4z\" (UniqueName: \"kubernetes.io/projected/e7a18b30-357b-4579-b89d-642cf5908e58-kube-api-access-66j4z\") on node \"localhost\" DevicePath \"\"" Sep 5 06:05:04.388078 systemd[1]: var-lib-kubelet-pods-e7a18b30\x2d357b\x2d4579\x2db89d\x2d642cf5908e58-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d66j4z.mount: Deactivated successfully. Sep 5 06:05:04.388185 systemd[1]: var-lib-kubelet-pods-e7a18b30\x2d357b\x2d4579\x2db89d\x2d642cf5908e58-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 5 06:05:04.466546 systemd[1]: Removed slice kubepods-besteffort-pode7a18b30_357b_4579_b89d_642cf5908e58.slice - libcontainer container kubepods-besteffort-pode7a18b30_357b_4579_b89d_642cf5908e58.slice. 
Sep 5 06:05:04.476346 kubelet[2683]: I0905 06:05:04.476274 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-766ss" podStartSLOduration=2.018905883 podStartE2EDuration="13.476149587s" podCreationTimestamp="2025-09-05 06:04:51 +0000 UTC" firstStartedPulling="2025-09-05 06:04:52.166117314 +0000 UTC m=+20.892502935" lastFinishedPulling="2025-09-05 06:05:03.623360978 +0000 UTC m=+32.349746639" observedRunningTime="2025-09-05 06:05:04.475356866 +0000 UTC m=+33.201742527" watchObservedRunningTime="2025-09-05 06:05:04.476149587 +0000 UTC m=+33.202535288" Sep 5 06:05:04.536683 systemd[1]: Created slice kubepods-besteffort-pod9eca3c79_b141_469d_8828_6fd2bd1cac49.slice - libcontainer container kubepods-besteffort-pod9eca3c79_b141_469d_8828_6fd2bd1cac49.slice. Sep 5 06:05:04.582025 kubelet[2683]: I0905 06:05:04.581980 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9eca3c79-b141-469d-8828-6fd2bd1cac49-whisker-ca-bundle\") pod \"whisker-79ccf6698c-jth9n\" (UID: \"9eca3c79-b141-469d-8828-6fd2bd1cac49\") " pod="calico-system/whisker-79ccf6698c-jth9n" Sep 5 06:05:04.582137 kubelet[2683]: I0905 06:05:04.582048 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9eca3c79-b141-469d-8828-6fd2bd1cac49-whisker-backend-key-pair\") pod \"whisker-79ccf6698c-jth9n\" (UID: \"9eca3c79-b141-469d-8828-6fd2bd1cac49\") " pod="calico-system/whisker-79ccf6698c-jth9n" Sep 5 06:05:04.582137 kubelet[2683]: I0905 06:05:04.582070 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sswq\" (UniqueName: \"kubernetes.io/projected/9eca3c79-b141-469d-8828-6fd2bd1cac49-kube-api-access-7sswq\") pod \"whisker-79ccf6698c-jth9n\" (UID: 
\"9eca3c79-b141-469d-8828-6fd2bd1cac49\") " pod="calico-system/whisker-79ccf6698c-jth9n" Sep 5 06:05:04.841505 containerd[1527]: time="2025-09-05T06:05:04.841454650Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79ccf6698c-jth9n,Uid:9eca3c79-b141-469d-8828-6fd2bd1cac49,Namespace:calico-system,Attempt:0,}" Sep 5 06:05:04.987319 systemd-networkd[1436]: cali10c5d11e4f2: Link UP Sep 5 06:05:04.987563 systemd-networkd[1436]: cali10c5d11e4f2: Gained carrier Sep 5 06:05:04.999890 containerd[1527]: 2025-09-05 06:05:04.862 [INFO][3879] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 5 06:05:04.999890 containerd[1527]: 2025-09-05 06:05:04.893 [INFO][3879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--79ccf6698c--jth9n-eth0 whisker-79ccf6698c- calico-system 9eca3c79-b141-469d-8828-6fd2bd1cac49 912 0 2025-09-05 06:05:04 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79ccf6698c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-79ccf6698c-jth9n eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali10c5d11e4f2 [] [] }} ContainerID="37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" Namespace="calico-system" Pod="whisker-79ccf6698c-jth9n" WorkloadEndpoint="localhost-k8s-whisker--79ccf6698c--jth9n-" Sep 5 06:05:04.999890 containerd[1527]: 2025-09-05 06:05:04.893 [INFO][3879] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" Namespace="calico-system" Pod="whisker-79ccf6698c-jth9n" WorkloadEndpoint="localhost-k8s-whisker--79ccf6698c--jth9n-eth0" Sep 5 06:05:04.999890 containerd[1527]: 2025-09-05 06:05:04.947 [INFO][3892] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" HandleID="k8s-pod-network.37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" Workload="localhost-k8s-whisker--79ccf6698c--jth9n-eth0" Sep 5 06:05:05.000082 containerd[1527]: 2025-09-05 06:05:04.947 [INFO][3892] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" HandleID="k8s-pod-network.37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" Workload="localhost-k8s-whisker--79ccf6698c--jth9n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c3010), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-79ccf6698c-jth9n", "timestamp":"2025-09-05 06:05:04.947363018 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:05:05.000082 containerd[1527]: 2025-09-05 06:05:04.947 [INFO][3892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:05:05.000082 containerd[1527]: 2025-09-05 06:05:04.947 [INFO][3892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:05:05.000082 containerd[1527]: 2025-09-05 06:05:04.947 [INFO][3892] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:05:05.000082 containerd[1527]: 2025-09-05 06:05:04.957 [INFO][3892] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" host="localhost" Sep 5 06:05:05.000082 containerd[1527]: 2025-09-05 06:05:04.962 [INFO][3892] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:05:05.000082 containerd[1527]: 2025-09-05 06:05:04.966 [INFO][3892] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:05:05.000082 containerd[1527]: 2025-09-05 06:05:04.967 [INFO][3892] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:05.000082 containerd[1527]: 2025-09-05 06:05:04.969 [INFO][3892] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:05.000082 containerd[1527]: 2025-09-05 06:05:04.969 [INFO][3892] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" host="localhost" Sep 5 06:05:05.000281 containerd[1527]: 2025-09-05 06:05:04.970 [INFO][3892] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915 Sep 5 06:05:05.000281 containerd[1527]: 2025-09-05 06:05:04.974 [INFO][3892] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" host="localhost" Sep 5 06:05:05.000281 containerd[1527]: 2025-09-05 06:05:04.978 [INFO][3892] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" host="localhost" Sep 5 06:05:05.000281 containerd[1527]: 2025-09-05 06:05:04.978 [INFO][3892] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" host="localhost" Sep 5 06:05:05.000281 containerd[1527]: 2025-09-05 06:05:04.978 [INFO][3892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:05:05.000281 containerd[1527]: 2025-09-05 06:05:04.978 [INFO][3892] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" HandleID="k8s-pod-network.37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" Workload="localhost-k8s-whisker--79ccf6698c--jth9n-eth0" Sep 5 06:05:05.000398 containerd[1527]: 2025-09-05 06:05:04.981 [INFO][3879] cni-plugin/k8s.go 418: Populated endpoint ContainerID="37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" Namespace="calico-system" Pod="whisker-79ccf6698c-jth9n" WorkloadEndpoint="localhost-k8s-whisker--79ccf6698c--jth9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79ccf6698c--jth9n-eth0", GenerateName:"whisker-79ccf6698c-", Namespace:"calico-system", SelfLink:"", UID:"9eca3c79-b141-469d-8828-6fd2bd1cac49", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79ccf6698c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-79ccf6698c-jth9n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali10c5d11e4f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:05.000398 containerd[1527]: 2025-09-05 06:05:04.981 [INFO][3879] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" Namespace="calico-system" Pod="whisker-79ccf6698c-jth9n" WorkloadEndpoint="localhost-k8s-whisker--79ccf6698c--jth9n-eth0" Sep 5 06:05:05.000483 containerd[1527]: 2025-09-05 06:05:04.981 [INFO][3879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10c5d11e4f2 ContainerID="37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" Namespace="calico-system" Pod="whisker-79ccf6698c-jth9n" WorkloadEndpoint="localhost-k8s-whisker--79ccf6698c--jth9n-eth0" Sep 5 06:05:05.000483 containerd[1527]: 2025-09-05 06:05:04.987 [INFO][3879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" Namespace="calico-system" Pod="whisker-79ccf6698c-jth9n" WorkloadEndpoint="localhost-k8s-whisker--79ccf6698c--jth9n-eth0" Sep 5 06:05:05.000522 containerd[1527]: 2025-09-05 06:05:04.988 [INFO][3879] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" Namespace="calico-system" Pod="whisker-79ccf6698c-jth9n" 
WorkloadEndpoint="localhost-k8s-whisker--79ccf6698c--jth9n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--79ccf6698c--jth9n-eth0", GenerateName:"whisker-79ccf6698c-", Namespace:"calico-system", SelfLink:"", UID:"9eca3c79-b141-469d-8828-6fd2bd1cac49", ResourceVersion:"912", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 5, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79ccf6698c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915", Pod:"whisker-79ccf6698c-jth9n", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali10c5d11e4f2", MAC:"ae:0b:88:c3:ce:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:05.000580 containerd[1527]: 2025-09-05 06:05:04.998 [INFO][3879] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" Namespace="calico-system" Pod="whisker-79ccf6698c-jth9n" WorkloadEndpoint="localhost-k8s-whisker--79ccf6698c--jth9n-eth0" Sep 5 06:05:05.038708 containerd[1527]: time="2025-09-05T06:05:05.038498292Z" level=info msg="connecting to shim 
37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915" address="unix:///run/containerd/s/40d36e7d91168d7a750b8ce90cb35cb238aa43bacfad879882f960917b05cdd3" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:05:05.062588 systemd[1]: Started cri-containerd-37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915.scope - libcontainer container 37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915. Sep 5 06:05:05.081014 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:05:05.100501 containerd[1527]: time="2025-09-05T06:05:05.100340420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79ccf6698c-jth9n,Uid:9eca3c79-b141-469d-8828-6fd2bd1cac49,Namespace:calico-system,Attempt:0,} returns sandbox id \"37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915\"" Sep 5 06:05:05.103673 containerd[1527]: time="2025-09-05T06:05:05.103534103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 5 06:05:05.372725 kubelet[2683]: I0905 06:05:05.372615 2683 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7a18b30-357b-4579-b89d-642cf5908e58" path="/var/lib/kubelet/pods/e7a18b30-357b-4579-b89d-642cf5908e58/volumes" Sep 5 06:05:05.464359 kubelet[2683]: I0905 06:05:05.464314 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:05:06.572544 systemd-networkd[1436]: cali10c5d11e4f2: Gained IPv6LL Sep 5 06:05:06.777340 kubelet[2683]: I0905 06:05:06.777295 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:05:06.926744 containerd[1527]: time="2025-09-05T06:05:06.912184346Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4605606" Sep 5 06:05:06.926744 containerd[1527]: time="2025-09-05T06:05:06.915408229Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image 
id \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"5974839\" in 1.811832686s" Sep 5 06:05:06.926744 containerd[1527]: time="2025-09-05T06:05:06.926679117Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\"" Sep 5 06:05:06.926744 containerd[1527]: time="2025-09-05T06:05:06.921546153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:06.927357 containerd[1527]: time="2025-09-05T06:05:06.927334557Z" level=info msg="ImageCreate event name:\"sha256:270a0129ec34c3ad6ae6d56c0afce111eb0baa25dfdacb63722ec5887bafd3c5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:06.928080 containerd[1527]: time="2025-09-05T06:05:06.928054518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:06.931956 containerd[1527]: time="2025-09-05T06:05:06.931918841Z" level=info msg="CreateContainer within sandbox \"37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 5 06:05:06.939249 containerd[1527]: time="2025-09-05T06:05:06.939211686Z" level=info msg="Container fe67ce0a1af1975a49781e88c4012bd2b64e483de1829cd857f80eb426bbf27d: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:05:06.957343 containerd[1527]: time="2025-09-05T06:05:06.957293179Z" level=info msg="CreateContainer within sandbox \"37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915\" for 
&ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"fe67ce0a1af1975a49781e88c4012bd2b64e483de1829cd857f80eb426bbf27d\"" Sep 5 06:05:06.957765 containerd[1527]: time="2025-09-05T06:05:06.957743899Z" level=info msg="StartContainer for \"fe67ce0a1af1975a49781e88c4012bd2b64e483de1829cd857f80eb426bbf27d\"" Sep 5 06:05:06.958948 containerd[1527]: time="2025-09-05T06:05:06.958918060Z" level=info msg="connecting to shim fe67ce0a1af1975a49781e88c4012bd2b64e483de1829cd857f80eb426bbf27d" address="unix:///run/containerd/s/40d36e7d91168d7a750b8ce90cb35cb238aa43bacfad879882f960917b05cdd3" protocol=ttrpc version=3 Sep 5 06:05:06.980517 systemd[1]: Started cri-containerd-fe67ce0a1af1975a49781e88c4012bd2b64e483de1829cd857f80eb426bbf27d.scope - libcontainer container fe67ce0a1af1975a49781e88c4012bd2b64e483de1829cd857f80eb426bbf27d. Sep 5 06:05:07.022860 containerd[1527]: time="2025-09-05T06:05:07.022823706Z" level=info msg="StartContainer for \"fe67ce0a1af1975a49781e88c4012bd2b64e483de1829cd857f80eb426bbf27d\" returns successfully" Sep 5 06:05:07.024443 containerd[1527]: time="2025-09-05T06:05:07.024358467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 5 06:05:07.654972 systemd-networkd[1436]: vxlan.calico: Link UP Sep 5 06:05:07.654981 systemd-networkd[1436]: vxlan.calico: Gained carrier Sep 5 06:05:08.575524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1493317424.mount: Deactivated successfully. 
Sep 5 06:05:08.590571 containerd[1527]: time="2025-09-05T06:05:08.590512033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:08.591045 containerd[1527]: time="2025-09-05T06:05:08.590996794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=30823700" Sep 5 06:05:08.591997 containerd[1527]: time="2025-09-05T06:05:08.591960594Z" level=info msg="ImageCreate event name:\"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:08.593941 containerd[1527]: time="2025-09-05T06:05:08.593898915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:08.595397 containerd[1527]: time="2025-09-05T06:05:08.595349036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"30823530\" in 1.570903089s" Sep 5 06:05:08.595455 containerd[1527]: time="2025-09-05T06:05:08.595397996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:e210e86234bc99f018431b30477c5ca2ad6f7ecf67ef011498f7beb48fb0b21f\"" Sep 5 06:05:08.612581 containerd[1527]: time="2025-09-05T06:05:08.612542927Z" level=info msg="CreateContainer within sandbox \"37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 5 06:05:08.621573 
containerd[1527]: time="2025-09-05T06:05:08.621517853Z" level=info msg="Container 83662f0a4c3a49349d748a7f1b2935e345f417e5fbaa00d7a465f6841f94047e: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:05:08.623771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3008833298.mount: Deactivated successfully. Sep 5 06:05:08.629882 containerd[1527]: time="2025-09-05T06:05:08.629827898Z" level=info msg="CreateContainer within sandbox \"37f6c7dcd4bf8e64ced41e4c396468a6225adb65f8613c371e7486056c023915\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"83662f0a4c3a49349d748a7f1b2935e345f417e5fbaa00d7a465f6841f94047e\"" Sep 5 06:05:08.630452 containerd[1527]: time="2025-09-05T06:05:08.630425819Z" level=info msg="StartContainer for \"83662f0a4c3a49349d748a7f1b2935e345f417e5fbaa00d7a465f6841f94047e\"" Sep 5 06:05:08.631570 containerd[1527]: time="2025-09-05T06:05:08.631543580Z" level=info msg="connecting to shim 83662f0a4c3a49349d748a7f1b2935e345f417e5fbaa00d7a465f6841f94047e" address="unix:///run/containerd/s/40d36e7d91168d7a750b8ce90cb35cb238aa43bacfad879882f960917b05cdd3" protocol=ttrpc version=3 Sep 5 06:05:08.652637 systemd[1]: Started cri-containerd-83662f0a4c3a49349d748a7f1b2935e345f417e5fbaa00d7a465f6841f94047e.scope - libcontainer container 83662f0a4c3a49349d748a7f1b2935e345f417e5fbaa00d7a465f6841f94047e. 
Sep 5 06:05:08.695496 containerd[1527]: time="2025-09-05T06:05:08.695421581Z" level=info msg="StartContainer for \"83662f0a4c3a49349d748a7f1b2935e345f417e5fbaa00d7a465f6841f94047e\" returns successfully" Sep 5 06:05:09.513813 kubelet[2683]: I0905 06:05:09.513522 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-79ccf6698c-jth9n" podStartSLOduration=2.015013587 podStartE2EDuration="5.513503885s" podCreationTimestamp="2025-09-05 06:05:04 +0000 UTC" firstStartedPulling="2025-09-05 06:05:05.102958142 +0000 UTC m=+33.829343803" lastFinishedPulling="2025-09-05 06:05:08.60144844 +0000 UTC m=+37.327834101" observedRunningTime="2025-09-05 06:05:09.512961164 +0000 UTC m=+38.239346825" watchObservedRunningTime="2025-09-05 06:05:09.513503885 +0000 UTC m=+38.239889546" Sep 5 06:05:09.644600 systemd-networkd[1436]: vxlan.calico: Gained IPv6LL Sep 5 06:05:10.371448 containerd[1527]: time="2025-09-05T06:05:10.371402787Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968c94f79-l6q6t,Uid:2015faee-c21b-4fed-b369-661fd91c6266,Namespace:calico-apiserver,Attempt:0,}" Sep 5 06:05:10.479741 systemd-networkd[1436]: calic77176def74: Link UP Sep 5 06:05:10.480513 systemd-networkd[1436]: calic77176def74: Gained carrier Sep 5 06:05:10.492672 containerd[1527]: 2025-09-05 06:05:10.414 [INFO][4300] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7968c94f79--l6q6t-eth0 calico-apiserver-7968c94f79- calico-apiserver 2015faee-c21b-4fed-b369-661fd91c6266 846 0 2025-09-05 06:04:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7968c94f79 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7968c94f79-l6q6t eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic77176def74 [] [] }} ContainerID="70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-l6q6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--l6q6t-" Sep 5 06:05:10.492672 containerd[1527]: 2025-09-05 06:05:10.414 [INFO][4300] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-l6q6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--l6q6t-eth0" Sep 5 06:05:10.492672 containerd[1527]: 2025-09-05 06:05:10.439 [INFO][4315] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" HandleID="k8s-pod-network.70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" Workload="localhost-k8s-calico--apiserver--7968c94f79--l6q6t-eth0" Sep 5 06:05:10.492844 containerd[1527]: 2025-09-05 06:05:10.439 [INFO][4315] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" HandleID="k8s-pod-network.70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" Workload="localhost-k8s-calico--apiserver--7968c94f79--l6q6t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001373c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7968c94f79-l6q6t", "timestamp":"2025-09-05 06:05:10.439113745 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:05:10.492844 containerd[1527]: 2025-09-05 06:05:10.439 [INFO][4315] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Sep 5 06:05:10.492844 containerd[1527]: 2025-09-05 06:05:10.439 [INFO][4315] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 06:05:10.492844 containerd[1527]: 2025-09-05 06:05:10.439 [INFO][4315] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:05:10.492844 containerd[1527]: 2025-09-05 06:05:10.449 [INFO][4315] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" host="localhost" Sep 5 06:05:10.492844 containerd[1527]: 2025-09-05 06:05:10.454 [INFO][4315] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:05:10.492844 containerd[1527]: 2025-09-05 06:05:10.458 [INFO][4315] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:05:10.492844 containerd[1527]: 2025-09-05 06:05:10.460 [INFO][4315] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:10.492844 containerd[1527]: 2025-09-05 06:05:10.462 [INFO][4315] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:10.492844 containerd[1527]: 2025-09-05 06:05:10.462 [INFO][4315] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" host="localhost" Sep 5 06:05:10.493035 containerd[1527]: 2025-09-05 06:05:10.464 [INFO][4315] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f Sep 5 06:05:10.493035 containerd[1527]: 2025-09-05 06:05:10.469 [INFO][4315] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" host="localhost" Sep 5 06:05:10.493035 containerd[1527]: 2025-09-05 06:05:10.475 
[INFO][4315] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" host="localhost" Sep 5 06:05:10.493035 containerd[1527]: 2025-09-05 06:05:10.475 [INFO][4315] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" host="localhost" Sep 5 06:05:10.493035 containerd[1527]: 2025-09-05 06:05:10.475 [INFO][4315] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:05:10.493035 containerd[1527]: 2025-09-05 06:05:10.475 [INFO][4315] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" HandleID="k8s-pod-network.70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" Workload="localhost-k8s-calico--apiserver--7968c94f79--l6q6t-eth0" Sep 5 06:05:10.493171 containerd[1527]: 2025-09-05 06:05:10.477 [INFO][4300] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-l6q6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--l6q6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7968c94f79--l6q6t-eth0", GenerateName:"calico-apiserver-7968c94f79-", Namespace:"calico-apiserver", SelfLink:"", UID:"2015faee-c21b-4fed-b369-661fd91c6266", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 4, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"7968c94f79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7968c94f79-l6q6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic77176def74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:10.493220 containerd[1527]: 2025-09-05 06:05:10.477 [INFO][4300] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-l6q6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--l6q6t-eth0" Sep 5 06:05:10.493220 containerd[1527]: 2025-09-05 06:05:10.477 [INFO][4300] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic77176def74 ContainerID="70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-l6q6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--l6q6t-eth0" Sep 5 06:05:10.493220 containerd[1527]: 2025-09-05 06:05:10.480 [INFO][4300] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-l6q6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--l6q6t-eth0" Sep 5 06:05:10.493277 
containerd[1527]: 2025-09-05 06:05:10.480 [INFO][4300] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-l6q6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--l6q6t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7968c94f79--l6q6t-eth0", GenerateName:"calico-apiserver-7968c94f79-", Namespace:"calico-apiserver", SelfLink:"", UID:"2015faee-c21b-4fed-b369-661fd91c6266", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 4, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7968c94f79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f", Pod:"calico-apiserver-7968c94f79-l6q6t", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic77176def74", MAC:"fa:d1:14:0d:8c:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:10.493325 containerd[1527]: 2025-09-05 
06:05:10.488 [INFO][4300] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-l6q6t" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--l6q6t-eth0" Sep 5 06:05:10.517239 containerd[1527]: time="2025-09-05T06:05:10.517187589Z" level=info msg="connecting to shim 70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f" address="unix:///run/containerd/s/c315fd166f24597a74b6e25579345d1a1e2b9fc3cb4c019cf7f862cad563d0c2" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:05:10.547548 systemd[1]: Started cri-containerd-70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f.scope - libcontainer container 70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f. Sep 5 06:05:10.557217 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:05:10.575035 containerd[1527]: time="2025-09-05T06:05:10.574986421Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968c94f79-l6q6t,Uid:2015faee-c21b-4fed-b369-661fd91c6266,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f\"" Sep 5 06:05:10.576557 containerd[1527]: time="2025-09-05T06:05:10.576528582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 5 06:05:11.371678 containerd[1527]: time="2025-09-05T06:05:11.371634137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fftn2,Uid:0c507666-50ba-406b-b8d4-d7774556e6e3,Namespace:kube-system,Attempt:0,}" Sep 5 06:05:11.372053 containerd[1527]: time="2025-09-05T06:05:11.371656857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-bj7r9,Uid:e4905f2f-9ed5-4229-9af0-720a9e15f973,Namespace:calico-system,Attempt:0,}" Sep 5 06:05:11.481480 
systemd-networkd[1436]: calieeb88d71e40: Link UP Sep 5 06:05:11.483119 systemd-networkd[1436]: calieeb88d71e40: Gained carrier Sep 5 06:05:11.499628 containerd[1527]: 2025-09-05 06:05:11.418 [INFO][4382] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--fftn2-eth0 coredns-674b8bbfcf- kube-system 0c507666-50ba-406b-b8d4-d7774556e6e3 840 0 2025-09-05 06:04:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-fftn2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieeb88d71e40 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fftn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fftn2-" Sep 5 06:05:11.499628 containerd[1527]: 2025-09-05 06:05:11.418 [INFO][4382] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fftn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fftn2-eth0" Sep 5 06:05:11.499628 containerd[1527]: 2025-09-05 06:05:11.441 [INFO][4412] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" HandleID="k8s-pod-network.987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" Workload="localhost-k8s-coredns--674b8bbfcf--fftn2-eth0" Sep 5 06:05:11.499806 containerd[1527]: 2025-09-05 06:05:11.441 [INFO][4412] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" 
HandleID="k8s-pod-network.987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" Workload="localhost-k8s-coredns--674b8bbfcf--fftn2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003ac340), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-fftn2", "timestamp":"2025-09-05 06:05:11.441463014 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:05:11.499806 containerd[1527]: 2025-09-05 06:05:11.441 [INFO][4412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:05:11.499806 containerd[1527]: 2025-09-05 06:05:11.441 [INFO][4412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 06:05:11.499806 containerd[1527]: 2025-09-05 06:05:11.441 [INFO][4412] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:05:11.499806 containerd[1527]: 2025-09-05 06:05:11.454 [INFO][4412] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" host="localhost" Sep 5 06:05:11.499806 containerd[1527]: 2025-09-05 06:05:11.457 [INFO][4412] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:05:11.499806 containerd[1527]: 2025-09-05 06:05:11.461 [INFO][4412] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:05:11.499806 containerd[1527]: 2025-09-05 06:05:11.463 [INFO][4412] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:11.499806 containerd[1527]: 2025-09-05 06:05:11.465 [INFO][4412] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:11.499806 containerd[1527]: 2025-09-05 06:05:11.465 [INFO][4412] 
ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" host="localhost" Sep 5 06:05:11.499998 containerd[1527]: 2025-09-05 06:05:11.467 [INFO][4412] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7 Sep 5 06:05:11.499998 containerd[1527]: 2025-09-05 06:05:11.470 [INFO][4412] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" host="localhost" Sep 5 06:05:11.499998 containerd[1527]: 2025-09-05 06:05:11.474 [INFO][4412] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" host="localhost" Sep 5 06:05:11.499998 containerd[1527]: 2025-09-05 06:05:11.474 [INFO][4412] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" host="localhost" Sep 5 06:05:11.499998 containerd[1527]: 2025-09-05 06:05:11.474 [INFO][4412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 06:05:11.499998 containerd[1527]: 2025-09-05 06:05:11.474 [INFO][4412] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" HandleID="k8s-pod-network.987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" Workload="localhost-k8s-coredns--674b8bbfcf--fftn2-eth0" Sep 5 06:05:11.500114 containerd[1527]: 2025-09-05 06:05:11.479 [INFO][4382] cni-plugin/k8s.go 418: Populated endpoint ContainerID="987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fftn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fftn2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fftn2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0c507666-50ba-406b-b8d4-d7774556e6e3", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-fftn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieeb88d71e40", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:11.500189 containerd[1527]: 2025-09-05 06:05:11.479 [INFO][4382] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fftn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fftn2-eth0" Sep 5 06:05:11.500189 containerd[1527]: 2025-09-05 06:05:11.479 [INFO][4382] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieeb88d71e40 ContainerID="987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fftn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fftn2-eth0" Sep 5 06:05:11.500189 containerd[1527]: 2025-09-05 06:05:11.482 [INFO][4382] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fftn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fftn2-eth0" Sep 5 06:05:11.500249 containerd[1527]: 2025-09-05 06:05:11.483 [INFO][4382] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fftn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fftn2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--fftn2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"0c507666-50ba-406b-b8d4-d7774556e6e3", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7", Pod:"coredns-674b8bbfcf-fftn2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieeb88d71e40", MAC:"66:65:27:69:48:fa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:11.500249 containerd[1527]: 2025-09-05 06:05:11.493 [INFO][4382] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" Namespace="kube-system" Pod="coredns-674b8bbfcf-fftn2" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--fftn2-eth0" Sep 5 06:05:11.521178 systemd[1]: Started sshd@7-10.0.0.144:22-10.0.0.1:53284.service - OpenSSH per-connection server daemon (10.0.0.1:53284). Sep 5 06:05:11.547924 containerd[1527]: time="2025-09-05T06:05:11.547888111Z" level=info msg="connecting to shim 987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7" address="unix:///run/containerd/s/24e5c587b74445619cbb945500be35e6e4804157c51369877ac373dd70a78135" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:05:11.576560 systemd[1]: Started cri-containerd-987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7.scope - libcontainer container 987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7. Sep 5 06:05:11.601742 sshd[4439]: Accepted publickey for core from 10.0.0.1 port 53284 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:05:11.602920 systemd-networkd[1436]: cali904e9f7c96f: Link UP Sep 5 06:05:11.604237 systemd-networkd[1436]: cali904e9f7c96f: Gained carrier Sep 5 06:05:11.604922 sshd-session[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:05:11.605789 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:05:11.610842 systemd-logind[1502]: New session 8 of user core. Sep 5 06:05:11.614521 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.420 [INFO][4394] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--bj7r9-eth0 goldmane-54d579b49d- calico-system e4905f2f-9ed5-4229-9af0-720a9e15f973 849 0 2025-09-05 06:04:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-bj7r9 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali904e9f7c96f [] [] }} ContainerID="3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" Namespace="calico-system" Pod="goldmane-54d579b49d-bj7r9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bj7r9-" Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.421 [INFO][4394] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" Namespace="calico-system" Pod="goldmane-54d579b49d-bj7r9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bj7r9-eth0" Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.444 [INFO][4418] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" HandleID="k8s-pod-network.3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" Workload="localhost-k8s-goldmane--54d579b49d--bj7r9-eth0" Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.444 [INFO][4418] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" HandleID="k8s-pod-network.3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" Workload="localhost-k8s-goldmane--54d579b49d--bj7r9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x40002cb630), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-bj7r9", "timestamp":"2025-09-05 06:05:11.444521656 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.444 [INFO][4418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.474 [INFO][4418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.475 [INFO][4418] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.557 [INFO][4418] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" host="localhost" Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.569 [INFO][4418] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.575 [INFO][4418] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.577 [INFO][4418] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.581 [INFO][4418] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.581 [INFO][4418] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" host="localhost" 
Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.583 [INFO][4418] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.587 [INFO][4418] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" host="localhost" Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.592 [INFO][4418] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" host="localhost" Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.592 [INFO][4418] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" host="localhost" Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.592 [INFO][4418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 5 06:05:11.623798 containerd[1527]: 2025-09-05 06:05:11.593 [INFO][4418] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" HandleID="k8s-pod-network.3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" Workload="localhost-k8s-goldmane--54d579b49d--bj7r9-eth0" Sep 5 06:05:11.624261 containerd[1527]: 2025-09-05 06:05:11.599 [INFO][4394] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" Namespace="calico-system" Pod="goldmane-54d579b49d-bj7r9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bj7r9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--bj7r9-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e4905f2f-9ed5-4229-9af0-720a9e15f973", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-bj7r9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali904e9f7c96f", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:11.624261 containerd[1527]: 2025-09-05 06:05:11.599 [INFO][4394] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" Namespace="calico-system" Pod="goldmane-54d579b49d-bj7r9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bj7r9-eth0" Sep 5 06:05:11.624261 containerd[1527]: 2025-09-05 06:05:11.599 [INFO][4394] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali904e9f7c96f ContainerID="3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" Namespace="calico-system" Pod="goldmane-54d579b49d-bj7r9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bj7r9-eth0" Sep 5 06:05:11.624261 containerd[1527]: 2025-09-05 06:05:11.604 [INFO][4394] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" Namespace="calico-system" Pod="goldmane-54d579b49d-bj7r9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bj7r9-eth0" Sep 5 06:05:11.624261 containerd[1527]: 2025-09-05 06:05:11.606 [INFO][4394] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" Namespace="calico-system" Pod="goldmane-54d579b49d-bj7r9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bj7r9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--bj7r9-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"e4905f2f-9ed5-4229-9af0-720a9e15f973", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 4, 52, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b", Pod:"goldmane-54d579b49d-bj7r9", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali904e9f7c96f", MAC:"f2:22:2e:b7:48:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:11.624261 containerd[1527]: 2025-09-05 06:05:11.620 [INFO][4394] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" Namespace="calico-system" Pod="goldmane-54d579b49d-bj7r9" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--bj7r9-eth0" Sep 5 06:05:11.643401 containerd[1527]: time="2025-09-05T06:05:11.641484920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fftn2,Uid:0c507666-50ba-406b-b8d4-d7774556e6e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7\"" Sep 5 06:05:11.654596 containerd[1527]: time="2025-09-05T06:05:11.654558647Z" level=info msg="CreateContainer within sandbox \"987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 5 06:05:11.654909 containerd[1527]: 
time="2025-09-05T06:05:11.654627847Z" level=info msg="connecting to shim 3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b" address="unix:///run/containerd/s/be31c01b91e6c814eeaf1a4ac0992c28c1e70d425be89fa50877e143b7d30ff5" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:05:11.664623 containerd[1527]: time="2025-09-05T06:05:11.664514532Z" level=info msg="Container d6175c776d741b2844181468e20e9df0bd7ad79c4dfcc1bb588c7228c1993894: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:05:11.674299 containerd[1527]: time="2025-09-05T06:05:11.674178337Z" level=info msg="CreateContainer within sandbox \"987251e9b3fd262f23d5a44ec961fbf06fe11d9285e31663a491ccef1b158aa7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d6175c776d741b2844181468e20e9df0bd7ad79c4dfcc1bb588c7228c1993894\"" Sep 5 06:05:11.678531 containerd[1527]: time="2025-09-05T06:05:11.678416940Z" level=info msg="StartContainer for \"d6175c776d741b2844181468e20e9df0bd7ad79c4dfcc1bb588c7228c1993894\"" Sep 5 06:05:11.680761 containerd[1527]: time="2025-09-05T06:05:11.680732981Z" level=info msg="connecting to shim d6175c776d741b2844181468e20e9df0bd7ad79c4dfcc1bb588c7228c1993894" address="unix:///run/containerd/s/24e5c587b74445619cbb945500be35e6e4804157c51369877ac373dd70a78135" protocol=ttrpc version=3 Sep 5 06:05:11.682677 systemd[1]: Started cri-containerd-3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b.scope - libcontainer container 3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b. Sep 5 06:05:11.698449 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:05:11.711537 systemd[1]: Started cri-containerd-d6175c776d741b2844181468e20e9df0bd7ad79c4dfcc1bb588c7228c1993894.scope - libcontainer container d6175c776d741b2844181468e20e9df0bd7ad79c4dfcc1bb588c7228c1993894. 
Sep 5 06:05:11.727567 containerd[1527]: time="2025-09-05T06:05:11.727534125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-bj7r9,Uid:e4905f2f-9ed5-4229-9af0-720a9e15f973,Namespace:calico-system,Attempt:0,} returns sandbox id \"3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b\"" Sep 5 06:05:11.751936 containerd[1527]: time="2025-09-05T06:05:11.751900538Z" level=info msg="StartContainer for \"d6175c776d741b2844181468e20e9df0bd7ad79c4dfcc1bb588c7228c1993894\" returns successfully" Sep 5 06:05:11.822421 systemd-networkd[1436]: calic77176def74: Gained IPv6LL Sep 5 06:05:11.924300 sshd[4484]: Connection closed by 10.0.0.1 port 53284 Sep 5 06:05:11.924630 sshd-session[4439]: pam_unix(sshd:session): session closed for user core Sep 5 06:05:11.928532 systemd[1]: sshd@7-10.0.0.144:22-10.0.0.1:53284.service: Deactivated successfully. Sep 5 06:05:11.931814 systemd[1]: session-8.scope: Deactivated successfully. Sep 5 06:05:11.933084 systemd-logind[1502]: Session 8 logged out. Waiting for processes to exit. Sep 5 06:05:11.934013 systemd-logind[1502]: Removed session 8. 
Sep 5 06:05:12.373211 containerd[1527]: time="2025-09-05T06:05:12.373103334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kl9t2,Uid:4cde16ae-a1c6-4825-89e0-2698e4deec05,Namespace:calico-system,Attempt:0,}" Sep 5 06:05:12.373733 containerd[1527]: time="2025-09-05T06:05:12.373330174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55995d5b69-tb6ks,Uid:bcaaad9e-3cf8-479b-bef2-88f1eddc24b8,Namespace:calico-system,Attempt:0,}" Sep 5 06:05:12.548406 systemd-networkd[1436]: cali7cdefc162a8: Link UP Sep 5 06:05:12.549423 systemd-networkd[1436]: cali7cdefc162a8: Gained carrier Sep 5 06:05:12.563689 kubelet[2683]: I0905 06:05:12.563622 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-fftn2" podStartSLOduration=34.563605029 podStartE2EDuration="34.563605029s" podCreationTimestamp="2025-09-05 06:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:05:12.563286749 +0000 UTC m=+41.289672410" watchObservedRunningTime="2025-09-05 06:05:12.563605029 +0000 UTC m=+41.289990690" Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.451 [INFO][4609] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--kl9t2-eth0 csi-node-driver- calico-system 4cde16ae-a1c6-4825-89e0-2698e4deec05 728 0 2025-09-05 06:04:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-kl9t2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7cdefc162a8 [] [] }} 
ContainerID="52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" Namespace="calico-system" Pod="csi-node-driver-kl9t2" WorkloadEndpoint="localhost-k8s-csi--node--driver--kl9t2-" Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.451 [INFO][4609] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" Namespace="calico-system" Pod="csi-node-driver-kl9t2" WorkloadEndpoint="localhost-k8s-csi--node--driver--kl9t2-eth0" Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.484 [INFO][4637] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" HandleID="k8s-pod-network.52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" Workload="localhost-k8s-csi--node--driver--kl9t2-eth0" Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.484 [INFO][4637] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" HandleID="k8s-pod-network.52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" Workload="localhost-k8s-csi--node--driver--kl9t2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c25f0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-kl9t2", "timestamp":"2025-09-05 06:05:12.484024149 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.484 [INFO][4637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.485 [INFO][4637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.485 [INFO][4637] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.499 [INFO][4637] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" host="localhost" Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.506 [INFO][4637] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.511 [INFO][4637] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.513 [INFO][4637] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.516 [INFO][4637] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.516 [INFO][4637] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" host="localhost" Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.518 [INFO][4637] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74 Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.525 [INFO][4637] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" host="localhost" Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.543 [INFO][4637] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 
handle="k8s-pod-network.52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" host="localhost" Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.543 [INFO][4637] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" host="localhost" Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.543 [INFO][4637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:05:12.584376 containerd[1527]: 2025-09-05 06:05:12.543 [INFO][4637] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" HandleID="k8s-pod-network.52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" Workload="localhost-k8s-csi--node--driver--kl9t2-eth0" Sep 5 06:05:12.584932 containerd[1527]: 2025-09-05 06:05:12.545 [INFO][4609] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" Namespace="calico-system" Pod="csi-node-driver-kl9t2" WorkloadEndpoint="localhost-k8s-csi--node--driver--kl9t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kl9t2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4cde16ae-a1c6-4825-89e0-2698e4deec05", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 4, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-kl9t2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7cdefc162a8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:12.584932 containerd[1527]: 2025-09-05 06:05:12.545 [INFO][4609] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" Namespace="calico-system" Pod="csi-node-driver-kl9t2" WorkloadEndpoint="localhost-k8s-csi--node--driver--kl9t2-eth0" Sep 5 06:05:12.584932 containerd[1527]: 2025-09-05 06:05:12.545 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7cdefc162a8 ContainerID="52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" Namespace="calico-system" Pod="csi-node-driver-kl9t2" WorkloadEndpoint="localhost-k8s-csi--node--driver--kl9t2-eth0" Sep 5 06:05:12.584932 containerd[1527]: 2025-09-05 06:05:12.548 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" Namespace="calico-system" Pod="csi-node-driver-kl9t2" WorkloadEndpoint="localhost-k8s-csi--node--driver--kl9t2-eth0" Sep 5 06:05:12.584932 containerd[1527]: 2025-09-05 06:05:12.549 [INFO][4609] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" 
Namespace="calico-system" Pod="csi-node-driver-kl9t2" WorkloadEndpoint="localhost-k8s-csi--node--driver--kl9t2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--kl9t2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4cde16ae-a1c6-4825-89e0-2698e4deec05", ResourceVersion:"728", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 4, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74", Pod:"csi-node-driver-kl9t2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7cdefc162a8", MAC:"a6:d6:5d:79:00:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:12.584932 containerd[1527]: 2025-09-05 06:05:12.580 [INFO][4609] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" Namespace="calico-system" Pod="csi-node-driver-kl9t2" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--kl9t2-eth0" Sep 5 06:05:12.623116 containerd[1527]: time="2025-09-05T06:05:12.622814138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:12.625118 containerd[1527]: time="2025-09-05T06:05:12.624970859Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=44530807" Sep 5 06:05:12.628186 containerd[1527]: time="2025-09-05T06:05:12.628151541Z" level=info msg="ImageCreate event name:\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:12.632977 containerd[1527]: time="2025-09-05T06:05:12.632926863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:12.634694 containerd[1527]: time="2025-09-05T06:05:12.634501864Z" level=info msg="connecting to shim 52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74" address="unix:///run/containerd/s/2036579982c513c0fbcf02063832207f5d58d3e735144b0a67cf101066b83849" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:05:12.635899 containerd[1527]: time="2025-09-05T06:05:12.635608904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"45900064\" in 2.058955282s" Sep 5 06:05:12.635899 containerd[1527]: time="2025-09-05T06:05:12.635725984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference 
\"sha256:632fbde00b1918016ac07458e79cc438ccda83cb762bfd5fc50a26721abced08\"" Sep 5 06:05:12.638479 containerd[1527]: time="2025-09-05T06:05:12.638114546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 5 06:05:12.645345 containerd[1527]: time="2025-09-05T06:05:12.642717068Z" level=info msg="CreateContainer within sandbox \"70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 5 06:05:12.656528 containerd[1527]: time="2025-09-05T06:05:12.656488435Z" level=info msg="Container 3550b632fe8b81356ddc52d6cd31873c6181bb5ca289683fd749634949551588: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:05:12.658742 systemd-networkd[1436]: cali53e0c72fc35: Link UP Sep 5 06:05:12.659739 systemd-networkd[1436]: cali53e0c72fc35: Gained carrier Sep 5 06:05:12.668554 systemd[1]: Started cri-containerd-52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74.scope - libcontainer container 52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74. 
Sep 5 06:05:12.669029 containerd[1527]: time="2025-09-05T06:05:12.668993841Z" level=info msg="CreateContainer within sandbox \"70d61532d7774e6b96b349f275a455e8bef22dbc324d21b9a758c632aa410a3f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3550b632fe8b81356ddc52d6cd31873c6181bb5ca289683fd749634949551588\"" Sep 5 06:05:12.672433 containerd[1527]: time="2025-09-05T06:05:12.672038682Z" level=info msg="StartContainer for \"3550b632fe8b81356ddc52d6cd31873c6181bb5ca289683fd749634949551588\"" Sep 5 06:05:12.674279 containerd[1527]: time="2025-09-05T06:05:12.674243284Z" level=info msg="connecting to shim 3550b632fe8b81356ddc52d6cd31873c6181bb5ca289683fd749634949551588" address="unix:///run/containerd/s/c315fd166f24597a74b6e25579345d1a1e2b9fc3cb4c019cf7f862cad563d0c2" protocol=ttrpc version=3 Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.444 [INFO][4601] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--55995d5b69--tb6ks-eth0 calico-kube-controllers-55995d5b69- calico-system bcaaad9e-3cf8-479b-bef2-88f1eddc24b8 847 0 2025-09-05 06:04:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:55995d5b69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-55995d5b69-tb6ks eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali53e0c72fc35 [] [] }} ContainerID="70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" Namespace="calico-system" Pod="calico-kube-controllers-55995d5b69-tb6ks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55995d5b69--tb6ks-" Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.444 [INFO][4601] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" Namespace="calico-system" Pod="calico-kube-controllers-55995d5b69-tb6ks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55995d5b69--tb6ks-eth0" Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.493 [INFO][4631] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" HandleID="k8s-pod-network.70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" Workload="localhost-k8s-calico--kube--controllers--55995d5b69--tb6ks-eth0" Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.494 [INFO][4631] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" HandleID="k8s-pod-network.70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" Workload="localhost-k8s-calico--kube--controllers--55995d5b69--tb6ks-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400034b200), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-55995d5b69-tb6ks", "timestamp":"2025-09-05 06:05:12.493869274 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.494 [INFO][4631] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.543 [INFO][4631] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.543 [INFO][4631] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.600 [INFO][4631] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" host="localhost" Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.615 [INFO][4631] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.621 [INFO][4631] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.627 [INFO][4631] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.632 [INFO][4631] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.632 [INFO][4631] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" host="localhost" Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.634 [INFO][4631] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.638 [INFO][4631] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" host="localhost" Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.647 [INFO][4631] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" host="localhost" Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.647 [INFO][4631] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" host="localhost" Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.648 [INFO][4631] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 5 06:05:12.677877 containerd[1527]: 2025-09-05 06:05:12.648 [INFO][4631] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" HandleID="k8s-pod-network.70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" Workload="localhost-k8s-calico--kube--controllers--55995d5b69--tb6ks-eth0" Sep 5 06:05:12.678454 containerd[1527]: 2025-09-05 06:05:12.653 [INFO][4601] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" Namespace="calico-system" Pod="calico-kube-controllers-55995d5b69-tb6ks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55995d5b69--tb6ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55995d5b69--tb6ks-eth0", GenerateName:"calico-kube-controllers-55995d5b69-", Namespace:"calico-system", SelfLink:"", UID:"bcaaad9e-3cf8-479b-bef2-88f1eddc24b8", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55995d5b69", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-55995d5b69-tb6ks", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali53e0c72fc35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:12.678454 containerd[1527]: 2025-09-05 06:05:12.653 [INFO][4601] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" Namespace="calico-system" Pod="calico-kube-controllers-55995d5b69-tb6ks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55995d5b69--tb6ks-eth0" Sep 5 06:05:12.678454 containerd[1527]: 2025-09-05 06:05:12.653 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali53e0c72fc35 ContainerID="70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" Namespace="calico-system" Pod="calico-kube-controllers-55995d5b69-tb6ks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55995d5b69--tb6ks-eth0" Sep 5 06:05:12.678454 containerd[1527]: 2025-09-05 06:05:12.659 [INFO][4601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" Namespace="calico-system" Pod="calico-kube-controllers-55995d5b69-tb6ks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55995d5b69--tb6ks-eth0" Sep 5 06:05:12.678454 containerd[1527]: 2025-09-05 
06:05:12.662 [INFO][4601] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" Namespace="calico-system" Pod="calico-kube-controllers-55995d5b69-tb6ks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55995d5b69--tb6ks-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--55995d5b69--tb6ks-eth0", GenerateName:"calico-kube-controllers-55995d5b69-", Namespace:"calico-system", SelfLink:"", UID:"bcaaad9e-3cf8-479b-bef2-88f1eddc24b8", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 4, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"55995d5b69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb", Pod:"calico-kube-controllers-55995d5b69-tb6ks", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali53e0c72fc35", MAC:"0a:d7:f8:73:62:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 5 06:05:12.678454 containerd[1527]: 2025-09-05 
06:05:12.672 [INFO][4601] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" Namespace="calico-system" Pod="calico-kube-controllers-55995d5b69-tb6ks" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--55995d5b69--tb6ks-eth0" Sep 5 06:05:12.689804 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 5 06:05:12.701557 containerd[1527]: time="2025-09-05T06:05:12.700856097Z" level=info msg="connecting to shim 70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb" address="unix:///run/containerd/s/f6bfe084006a96e8850d2cfe78596818aba2fc88ec42a0424abc13fe6a3ebd47" namespace=k8s.io protocol=ttrpc version=3 Sep 5 06:05:12.701530 systemd[1]: Started cri-containerd-3550b632fe8b81356ddc52d6cd31873c6181bb5ca289683fd749634949551588.scope - libcontainer container 3550b632fe8b81356ddc52d6cd31873c6181bb5ca289683fd749634949551588. Sep 5 06:05:12.706240 containerd[1527]: time="2025-09-05T06:05:12.706204499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-kl9t2,Uid:4cde16ae-a1c6-4825-89e0-2698e4deec05,Namespace:calico-system,Attempt:0,} returns sandbox id \"52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74\"" Sep 5 06:05:12.738551 systemd[1]: Started cri-containerd-70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb.scope - libcontainer container 70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb. 
Sep 5 06:05:12.749347 containerd[1527]: time="2025-09-05T06:05:12.749300361Z" level=info msg="StartContainer for \"3550b632fe8b81356ddc52d6cd31873c6181bb5ca289683fd749634949551588\" returns successfully"
Sep 5 06:05:12.754241 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 06:05:12.781215 containerd[1527]: time="2025-09-05T06:05:12.781176577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-55995d5b69-tb6ks,Uid:bcaaad9e-3cf8-479b-bef2-88f1eddc24b8,Namespace:calico-system,Attempt:0,} returns sandbox id \"70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb\""
Sep 5 06:05:13.484810 systemd-networkd[1436]: calieeb88d71e40: Gained IPv6LL
Sep 5 06:05:13.537079 kubelet[2683]: I0905 06:05:13.536444 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7968c94f79-l6q6t" podStartSLOduration=25.475646571 podStartE2EDuration="27.536427214s" podCreationTimestamp="2025-09-05 06:04:46 +0000 UTC" firstStartedPulling="2025-09-05 06:05:10.576269622 +0000 UTC m=+39.302655283" lastFinishedPulling="2025-09-05 06:05:12.637050265 +0000 UTC m=+41.363435926" observedRunningTime="2025-09-05 06:05:13.535301854 +0000 UTC m=+42.261687515" watchObservedRunningTime="2025-09-05 06:05:13.536427214 +0000 UTC m=+42.262812875"
Sep 5 06:05:13.613977 systemd-networkd[1436]: cali904e9f7c96f: Gained IPv6LL
Sep 5 06:05:13.933371 systemd-networkd[1436]: cali7cdefc162a8: Gained IPv6LL
Sep 5 06:05:13.996937 systemd-networkd[1436]: cali53e0c72fc35: Gained IPv6LL
Sep 5 06:05:14.371467 containerd[1527]: time="2025-09-05T06:05:14.371344751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cx2nl,Uid:18fc31d9-056e-48eb-b2ea-0cfe91c332c1,Namespace:kube-system,Attempt:0,}"
Sep 5 06:05:14.371909 containerd[1527]: time="2025-09-05T06:05:14.371498311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968c94f79-7db7j,Uid:c27cfba9-9f26-49f2-945a-fd21b34afb17,Namespace:calico-apiserver,Attempt:0,}"
Sep 5 06:05:14.495351 systemd-networkd[1436]: cali6565917ba9a: Link UP
Sep 5 06:05:14.496062 systemd-networkd[1436]: cali6565917ba9a: Gained carrier
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.412 [INFO][4821] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--cx2nl-eth0 coredns-674b8bbfcf- kube-system 18fc31d9-056e-48eb-b2ea-0cfe91c332c1 844 0 2025-09-05 06:04:38 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-cx2nl eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6565917ba9a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" Namespace="kube-system" Pod="coredns-674b8bbfcf-cx2nl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cx2nl-"
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.413 [INFO][4821] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" Namespace="kube-system" Pod="coredns-674b8bbfcf-cx2nl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cx2nl-eth0"
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.445 [INFO][4850] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" HandleID="k8s-pod-network.dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" Workload="localhost-k8s-coredns--674b8bbfcf--cx2nl-eth0"
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.445 [INFO][4850] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" HandleID="k8s-pod-network.dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" Workload="localhost-k8s-coredns--674b8bbfcf--cx2nl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005828d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-cx2nl", "timestamp":"2025-09-05 06:05:14.445147663 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.445 [INFO][4850] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.445 [INFO][4850] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.445 [INFO][4850] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.459 [INFO][4850] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" host="localhost"
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.463 [INFO][4850] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.467 [INFO][4850] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.469 [INFO][4850] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.472 [INFO][4850] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.472 [INFO][4850] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" host="localhost"
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.474 [INFO][4850] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.478 [INFO][4850] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" host="localhost"
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.484 [INFO][4850] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" host="localhost"
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.484 [INFO][4850] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" host="localhost"
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.484 [INFO][4850] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 06:05:14.510523 containerd[1527]: 2025-09-05 06:05:14.484 [INFO][4850] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" HandleID="k8s-pod-network.dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" Workload="localhost-k8s-coredns--674b8bbfcf--cx2nl-eth0"
Sep 5 06:05:14.511023 containerd[1527]: 2025-09-05 06:05:14.491 [INFO][4821] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" Namespace="kube-system" Pod="coredns-674b8bbfcf-cx2nl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cx2nl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cx2nl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"18fc31d9-056e-48eb-b2ea-0cfe91c332c1", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-cx2nl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6565917ba9a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 06:05:14.511023 containerd[1527]: 2025-09-05 06:05:14.492 [INFO][4821] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" Namespace="kube-system" Pod="coredns-674b8bbfcf-cx2nl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cx2nl-eth0"
Sep 5 06:05:14.511023 containerd[1527]: 2025-09-05 06:05:14.492 [INFO][4821] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6565917ba9a ContainerID="dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" Namespace="kube-system" Pod="coredns-674b8bbfcf-cx2nl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cx2nl-eth0"
Sep 5 06:05:14.511023 containerd[1527]: 2025-09-05 06:05:14.495 [INFO][4821] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" Namespace="kube-system" Pod="coredns-674b8bbfcf-cx2nl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cx2nl-eth0"
Sep 5 06:05:14.511023 containerd[1527]: 2025-09-05 06:05:14.495 [INFO][4821] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" Namespace="kube-system" Pod="coredns-674b8bbfcf-cx2nl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cx2nl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--cx2nl-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"18fc31d9-056e-48eb-b2ea-0cfe91c332c1", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 4, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17", Pod:"coredns-674b8bbfcf-cx2nl", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6565917ba9a", MAC:"ea:6f:2a:7d:c9:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 06:05:14.511023 containerd[1527]: 2025-09-05 06:05:14.506 [INFO][4821] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" Namespace="kube-system" Pod="coredns-674b8bbfcf-cx2nl" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--cx2nl-eth0"
Sep 5 06:05:14.530197 kubelet[2683]: I0905 06:05:14.530146 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 06:05:14.535613 containerd[1527]: time="2025-09-05T06:05:14.535571623Z" level=info msg="connecting to shim dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17" address="unix:///run/containerd/s/198e46fc1cd03d4ddc791f7a2958fbf228d2d937241c6b646fc0c5160d59b94a" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:05:14.558590 systemd[1]: Started cri-containerd-dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17.scope - libcontainer container dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17.
Sep 5 06:05:14.577831 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 06:05:14.603234 systemd-networkd[1436]: calie98abed7792: Link UP
Sep 5 06:05:14.604798 systemd-networkd[1436]: calie98abed7792: Gained carrier
Sep 5 06:05:14.610234 containerd[1527]: time="2025-09-05T06:05:14.610195735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-cx2nl,Uid:18fc31d9-056e-48eb-b2ea-0cfe91c332c1,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17\""
Sep 5 06:05:14.616004 containerd[1527]: time="2025-09-05T06:05:14.615964378Z" level=info msg="CreateContainer within sandbox \"dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 5 06:05:14.624628 containerd[1527]: time="2025-09-05T06:05:14.624439661Z" level=info msg="Container 29426485c1c30b482222d6b95d554c6b53cfb93c98b34efbbae74115aaea9ec5: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.420 [INFO][4833] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7968c94f79--7db7j-eth0 calico-apiserver-7968c94f79- calico-apiserver c27cfba9-9f26-49f2-945a-fd21b34afb17 850 0 2025-09-05 06:04:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7968c94f79 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7968c94f79-7db7j eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie98abed7792 [] [] }} ContainerID="92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-7db7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--7db7j-"
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.420 [INFO][4833] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-7db7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--7db7j-eth0"
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.449 [INFO][4856] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" HandleID="k8s-pod-network.92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" Workload="localhost-k8s-calico--apiserver--7968c94f79--7db7j-eth0"
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.449 [INFO][4856] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" HandleID="k8s-pod-network.92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" Workload="localhost-k8s-calico--apiserver--7968c94f79--7db7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d750), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7968c94f79-7db7j", "timestamp":"2025-09-05 06:05:14.449538585 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.449 [INFO][4856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.484 [INFO][4856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.484 [INFO][4856] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.557 [INFO][4856] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" host="localhost"
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.566 [INFO][4856] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.572 [INFO][4856] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.577 [INFO][4856] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.580 [INFO][4856] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.580 [INFO][4856] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" host="localhost"
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.582 [INFO][4856] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.588 [INFO][4856] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" host="localhost"
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.596 [INFO][4856] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" host="localhost"
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.596 [INFO][4856] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" host="localhost"
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.597 [INFO][4856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 5 06:05:14.626377 containerd[1527]: 2025-09-05 06:05:14.597 [INFO][4856] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" HandleID="k8s-pod-network.92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" Workload="localhost-k8s-calico--apiserver--7968c94f79--7db7j-eth0"
Sep 5 06:05:14.627005 containerd[1527]: 2025-09-05 06:05:14.599 [INFO][4833] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-7db7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--7db7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7968c94f79--7db7j-eth0", GenerateName:"calico-apiserver-7968c94f79-", Namespace:"calico-apiserver", SelfLink:"", UID:"c27cfba9-9f26-49f2-945a-fd21b34afb17", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 4, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7968c94f79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7968c94f79-7db7j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie98abed7792", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 06:05:14.627005 containerd[1527]: 2025-09-05 06:05:14.600 [INFO][4833] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-7db7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--7db7j-eth0"
Sep 5 06:05:14.627005 containerd[1527]: 2025-09-05 06:05:14.600 [INFO][4833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie98abed7792 ContainerID="92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-7db7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--7db7j-eth0"
Sep 5 06:05:14.627005 containerd[1527]: 2025-09-05 06:05:14.605 [INFO][4833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-7db7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--7db7j-eth0"
Sep 5 06:05:14.627005 containerd[1527]: 2025-09-05 06:05:14.605 [INFO][4833] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-7db7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--7db7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7968c94f79--7db7j-eth0", GenerateName:"calico-apiserver-7968c94f79-", Namespace:"calico-apiserver", SelfLink:"", UID:"c27cfba9-9f26-49f2-945a-fd21b34afb17", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2025, time.September, 5, 6, 4, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7968c94f79", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063", Pod:"calico-apiserver-7968c94f79-7db7j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie98abed7792", MAC:"4a:c4:7c:37:9e:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 5 06:05:14.627005 containerd[1527]: 2025-09-05 06:05:14.620 [INFO][4833] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" Namespace="calico-apiserver" Pod="calico-apiserver-7968c94f79-7db7j" WorkloadEndpoint="localhost-k8s-calico--apiserver--7968c94f79--7db7j-eth0"
Sep 5 06:05:14.638618 containerd[1527]: time="2025-09-05T06:05:14.638574228Z" level=info msg="CreateContainer within sandbox \"dd91a5f2e1f36c33906cfabadfdd5ad2499942b1ec70f3ebf1b96b60b80d0e17\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"29426485c1c30b482222d6b95d554c6b53cfb93c98b34efbbae74115aaea9ec5\""
Sep 5 06:05:14.640178 containerd[1527]: time="2025-09-05T06:05:14.639426188Z" level=info msg="StartContainer for \"29426485c1c30b482222d6b95d554c6b53cfb93c98b34efbbae74115aaea9ec5\""
Sep 5 06:05:14.641034 containerd[1527]: time="2025-09-05T06:05:14.640965909Z" level=info msg="connecting to shim 29426485c1c30b482222d6b95d554c6b53cfb93c98b34efbbae74115aaea9ec5" address="unix:///run/containerd/s/198e46fc1cd03d4ddc791f7a2958fbf228d2d937241c6b646fc0c5160d59b94a" protocol=ttrpc version=3
Sep 5 06:05:14.657607 containerd[1527]: time="2025-09-05T06:05:14.657558836Z" level=info msg="connecting to shim 92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063" address="unix:///run/containerd/s/9d56ab5b11ba09eb2c50e9ada3d7fd666b586d5f1fc0e23b0948091f25d5a04d" namespace=k8s.io protocol=ttrpc version=3
Sep 5 06:05:14.667647 systemd[1]: Started cri-containerd-29426485c1c30b482222d6b95d554c6b53cfb93c98b34efbbae74115aaea9ec5.scope - libcontainer container 29426485c1c30b482222d6b95d554c6b53cfb93c98b34efbbae74115aaea9ec5.
Sep 5 06:05:14.686563 systemd[1]: Started cri-containerd-92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063.scope - libcontainer container 92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063.
Sep 5 06:05:14.709404 containerd[1527]: time="2025-09-05T06:05:14.709352578Z" level=info msg="StartContainer for \"29426485c1c30b482222d6b95d554c6b53cfb93c98b34efbbae74115aaea9ec5\" returns successfully"
Sep 5 06:05:14.720089 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 5 06:05:14.817570 containerd[1527]: time="2025-09-05T06:05:14.817530226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7968c94f79-7db7j,Uid:c27cfba9-9f26-49f2-945a-fd21b34afb17,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063\""
Sep 5 06:05:14.822095 containerd[1527]: time="2025-09-05T06:05:14.822055787Z" level=info msg="CreateContainer within sandbox \"92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 5 06:05:14.831592 containerd[1527]: time="2025-09-05T06:05:14.831560672Z" level=info msg="Container 25167d183e2b733182834e8778a0748b292bc7d030736aababa9cb22ed8753fe: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:05:14.839496 containerd[1527]: time="2025-09-05T06:05:14.839462155Z" level=info msg="CreateContainer within sandbox \"92048e50d94276ee38e434446ae2f98e35e0165d4d74eb0a46265db618c11063\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"25167d183e2b733182834e8778a0748b292bc7d030736aababa9cb22ed8753fe\""
Sep 5 06:05:14.839915 containerd[1527]: time="2025-09-05T06:05:14.839885595Z" level=info msg="StartContainer for \"25167d183e2b733182834e8778a0748b292bc7d030736aababa9cb22ed8753fe\""
Sep 5 06:05:14.841471 containerd[1527]: time="2025-09-05T06:05:14.841440116Z" level=info msg="connecting to shim 25167d183e2b733182834e8778a0748b292bc7d030736aababa9cb22ed8753fe" address="unix:///run/containerd/s/9d56ab5b11ba09eb2c50e9ada3d7fd666b586d5f1fc0e23b0948091f25d5a04d" protocol=ttrpc version=3
Sep 5 06:05:14.865555 systemd[1]: Started cri-containerd-25167d183e2b733182834e8778a0748b292bc7d030736aababa9cb22ed8753fe.scope - libcontainer container 25167d183e2b733182834e8778a0748b292bc7d030736aababa9cb22ed8753fe.
Sep 5 06:05:14.906605 containerd[1527]: time="2025-09-05T06:05:14.906509704Z" level=info msg="StartContainer for \"25167d183e2b733182834e8778a0748b292bc7d030736aababa9cb22ed8753fe\" returns successfully"
Sep 5 06:05:15.052354 containerd[1527]: time="2025-09-05T06:05:15.052307086Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:05:15.053044 containerd[1527]: time="2025-09-05T06:05:15.053010487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=61845332"
Sep 5 06:05:15.054607 containerd[1527]: time="2025-09-05T06:05:15.053747527Z" level=info msg="ImageCreate event name:\"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:05:15.056143 containerd[1527]: time="2025-09-05T06:05:15.056079808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:05:15.057120 containerd[1527]: time="2025-09-05T06:05:15.057083608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"61845178\" in 2.418922422s"
Sep 5 06:05:15.057185 containerd[1527]: time="2025-09-05T06:05:15.057115848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:14088376331a0622b7f6a2fbc2f2932806a6eafdd7b602f6139d3b985bf1e685\""
Sep 5 06:05:15.058816 containerd[1527]: time="2025-09-05T06:05:15.058789929Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\""
Sep 5 06:05:15.062201 containerd[1527]: time="2025-09-05T06:05:15.062142450Z" level=info msg="CreateContainer within sandbox \"3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 5 06:05:15.088424 containerd[1527]: time="2025-09-05T06:05:15.088370981Z" level=info msg="Container 3c6fe0eda2ddf1e84332a3771eacd6859aa0385f7b651e176bec296ac5ad666b: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:05:15.094122 containerd[1527]: time="2025-09-05T06:05:15.094074663Z" level=info msg="CreateContainer within sandbox \"3330af954d572d397355e1900d5b3bd87ab7413b2e1e5f5144d55d68a7a11e8b\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"3c6fe0eda2ddf1e84332a3771eacd6859aa0385f7b651e176bec296ac5ad666b\""
Sep 5 06:05:15.094931 containerd[1527]: time="2025-09-05T06:05:15.094908744Z" level=info msg="StartContainer for \"3c6fe0eda2ddf1e84332a3771eacd6859aa0385f7b651e176bec296ac5ad666b\""
Sep 5 06:05:15.096091 containerd[1527]: time="2025-09-05T06:05:15.096067824Z" level=info msg="connecting to shim 3c6fe0eda2ddf1e84332a3771eacd6859aa0385f7b651e176bec296ac5ad666b" address="unix:///run/containerd/s/be31c01b91e6c814eeaf1a4ac0992c28c1e70d425be89fa50877e143b7d30ff5" protocol=ttrpc version=3
Sep 5 06:05:15.114532 systemd[1]: Started cri-containerd-3c6fe0eda2ddf1e84332a3771eacd6859aa0385f7b651e176bec296ac5ad666b.scope - libcontainer container 3c6fe0eda2ddf1e84332a3771eacd6859aa0385f7b651e176bec296ac5ad666b.
Sep 5 06:05:15.160567 containerd[1527]: time="2025-09-05T06:05:15.160534771Z" level=info msg="StartContainer for \"3c6fe0eda2ddf1e84332a3771eacd6859aa0385f7b651e176bec296ac5ad666b\" returns successfully"
Sep 5 06:05:15.382793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2975332125.mount: Deactivated successfully.
Sep 5 06:05:15.532728 systemd-networkd[1436]: cali6565917ba9a: Gained IPv6LL
Sep 5 06:05:15.563759 kubelet[2683]: I0905 06:05:15.563661 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-bj7r9" podStartSLOduration=20.234421732 podStartE2EDuration="23.563645375s" podCreationTimestamp="2025-09-05 06:04:52 +0000 UTC" firstStartedPulling="2025-09-05 06:05:11.728605086 +0000 UTC m=+40.454990747" lastFinishedPulling="2025-09-05 06:05:15.057828729 +0000 UTC m=+43.784214390" observedRunningTime="2025-09-05 06:05:15.561761134 +0000 UTC m=+44.288146755" watchObservedRunningTime="2025-09-05 06:05:15.563645375 +0000 UTC m=+44.290030996"
Sep 5 06:05:15.584768 kubelet[2683]: I0905 06:05:15.584711 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7968c94f79-7db7j" podStartSLOduration=29.584694984 podStartE2EDuration="29.584694984s" podCreationTimestamp="2025-09-05 06:04:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:05:15.583530543 +0000 UTC m=+44.309916204" watchObservedRunningTime="2025-09-05 06:05:15.584694984 +0000 UTC m=+44.311080645"
Sep 5 06:05:16.492565 systemd-networkd[1436]: calie98abed7792: Gained IPv6LL
Sep 5 06:05:16.542927 kubelet[2683]: I0905 06:05:16.542897 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 06:05:16.543295 kubelet[2683]: I0905 06:05:16.543231 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 06:05:16.583501 containerd[1527]: time="2025-09-05T06:05:16.583451417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8227489"
Sep 5 06:05:16.585727 containerd[1527]: time="2025-09-05T06:05:16.583557537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:05:16.585727 containerd[1527]: time="2025-09-05T06:05:16.584299097Z" level=info msg="ImageCreate event name:\"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:05:16.587227 containerd[1527]: time="2025-09-05T06:05:16.587191138Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 5 06:05:16.587994 containerd[1527]: time="2025-09-05T06:05:16.587879058Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"9596730\" in 1.529062009s"
Sep 5 06:05:16.587994 containerd[1527]: time="2025-09-05T06:05:16.587912138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:5e2b30128ce4b607acd97d3edef62ce1a90be0259903090a51c360adbe4a8f3b\""
Sep 5 06:05:16.588970 containerd[1527]: time="2025-09-05T06:05:16.588909939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 5 06:05:16.593353 containerd[1527]: time="2025-09-05T06:05:16.593298140Z" level=info msg="CreateContainer within sandbox \"52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 5 06:05:16.600663 containerd[1527]: time="2025-09-05T06:05:16.600616423Z" level=info msg="Container 8a41d892189af2ff227efaf93516e903915c7273301feed3d50642ee86d814c3: CDI devices from CRI Config.CDIDevices: []"
Sep 5 06:05:16.609197 containerd[1527]: time="2025-09-05T06:05:16.609140706Z" level=info msg="CreateContainer within sandbox \"52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8a41d892189af2ff227efaf93516e903915c7273301feed3d50642ee86d814c3\""
Sep 5 06:05:16.610048 containerd[1527]: time="2025-09-05T06:05:16.610001227Z" level=info msg="StartContainer for \"8a41d892189af2ff227efaf93516e903915c7273301feed3d50642ee86d814c3\""
Sep 5 06:05:16.614135 containerd[1527]: time="2025-09-05T06:05:16.612912108Z" level=info msg="connecting to shim 8a41d892189af2ff227efaf93516e903915c7273301feed3d50642ee86d814c3" address="unix:///run/containerd/s/2036579982c513c0fbcf02063832207f5d58d3e735144b0a67cf101066b83849" protocol=ttrpc version=3
Sep 5 06:05:16.647609 systemd[1]: Started cri-containerd-8a41d892189af2ff227efaf93516e903915c7273301feed3d50642ee86d814c3.scope - libcontainer container 8a41d892189af2ff227efaf93516e903915c7273301feed3d50642ee86d814c3.
Sep 5 06:05:16.700978 containerd[1527]: time="2025-09-05T06:05:16.700942062Z" level=info msg="StartContainer for \"8a41d892189af2ff227efaf93516e903915c7273301feed3d50642ee86d814c3\" returns successfully"
Sep 5 06:05:16.936626 systemd[1]: Started sshd@8-10.0.0.144:22-10.0.0.1:53300.service - OpenSSH per-connection server daemon (10.0.0.1:53300).
Sep 5 06:05:17.005529 sshd[5130]: Accepted publickey for core from 10.0.0.1 port 53300 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE
Sep 5 06:05:17.007031 sshd-session[5130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 5 06:05:17.010772 systemd-logind[1502]: New session 9 of user core.
Sep 5 06:05:17.018564 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 5 06:05:17.201636 sshd[5133]: Connection closed by 10.0.0.1 port 53300
Sep 5 06:05:17.200944 sshd-session[5130]: pam_unix(sshd:session): session closed for user core
Sep 5 06:05:17.205234 systemd[1]: sshd@8-10.0.0.144:22-10.0.0.1:53300.service: Deactivated successfully.
Sep 5 06:05:17.205668 systemd-logind[1502]: Session 9 logged out. Waiting for processes to exit.
Sep 5 06:05:17.207674 systemd[1]: session-9.scope: Deactivated successfully.
Sep 5 06:05:17.210297 systemd-logind[1502]: Removed session 9.
Sep 5 06:05:17.877104 kubelet[2683]: I0905 06:05:17.876896 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 5 06:05:18.004935 containerd[1527]: time="2025-09-05T06:05:18.004891696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6835738f65081a8bfa44a3755973a20af1ed5c278135981c4ba027e25af29196\" id:\"52726ff2bab64116f59d9b2384e2981733b7c3b305f44439fd0c7732112b58c5\" pid:5166 exited_at:{seconds:1757052318 nanos:4610096}"
Sep 5 06:05:18.029372 kubelet[2683]: I0905 06:05:18.029319 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-cx2nl" podStartSLOduration=40.029301665 podStartE2EDuration="40.029301665s" podCreationTimestamp="2025-09-05 06:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-05 06:05:15.605198992 +0000 UTC m=+44.331584653" watchObservedRunningTime="2025-09-05 06:05:18.029301665 +0000 UTC m=+46.755687326"
Sep 5 06:05:18.090828 containerd[1527]: time="2025-09-05T06:05:18.090767405Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6835738f65081a8bfa44a3755973a20af1ed5c278135981c4ba027e25af29196\" id:\"c775b96e6a9c242efb04cbeaa781f070466d2d04dddb68a910bb1d873639b6d4\" pid:5191 exited_at:{seconds:1757052318 nanos:90366205}"
Sep 5 06:05:19.233087
containerd[1527]: time="2025-09-05T06:05:19.232429625Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:19.233087 containerd[1527]: time="2025-09-05T06:05:19.233047545Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=48134957" Sep 5 06:05:19.233837 containerd[1527]: time="2025-09-05T06:05:19.233810465Z" level=info msg="ImageCreate event name:\"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:19.236099 containerd[1527]: time="2025-09-05T06:05:19.236072546Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:19.236632 containerd[1527]: time="2025-09-05T06:05:19.236561906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"49504166\" in 2.647620727s" Sep 5 06:05:19.236632 containerd[1527]: time="2025-09-05T06:05:19.236595746Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:34117caf92350e1565610f2254377d7455b11e36666b5ce11b4a13670720432a\"" Sep 5 06:05:19.237927 containerd[1527]: time="2025-09-05T06:05:19.237736706Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 5 06:05:19.249823 containerd[1527]: time="2025-09-05T06:05:19.249790630Z" level=info msg="CreateContainer within sandbox 
\"70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 5 06:05:19.263407 containerd[1527]: time="2025-09-05T06:05:19.263086034Z" level=info msg="Container 23acdc8fcbceae71cddfa20d4bc68bc5c12413ef3f6a1c41690c4df918db2458: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:05:19.269766 containerd[1527]: time="2025-09-05T06:05:19.269728956Z" level=info msg="CreateContainer within sandbox \"70533277bee0907f33ad289130eec0fe6e0048a053549695bc27dc6b9578c8fb\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"23acdc8fcbceae71cddfa20d4bc68bc5c12413ef3f6a1c41690c4df918db2458\"" Sep 5 06:05:19.270321 containerd[1527]: time="2025-09-05T06:05:19.270300956Z" level=info msg="StartContainer for \"23acdc8fcbceae71cddfa20d4bc68bc5c12413ef3f6a1c41690c4df918db2458\"" Sep 5 06:05:19.273200 containerd[1527]: time="2025-09-05T06:05:19.273171077Z" level=info msg="connecting to shim 23acdc8fcbceae71cddfa20d4bc68bc5c12413ef3f6a1c41690c4df918db2458" address="unix:///run/containerd/s/f6bfe084006a96e8850d2cfe78596818aba2fc88ec42a0424abc13fe6a3ebd47" protocol=ttrpc version=3 Sep 5 06:05:19.295556 systemd[1]: Started cri-containerd-23acdc8fcbceae71cddfa20d4bc68bc5c12413ef3f6a1c41690c4df918db2458.scope - libcontainer container 23acdc8fcbceae71cddfa20d4bc68bc5c12413ef3f6a1c41690c4df918db2458. 
Sep 5 06:05:19.343458 containerd[1527]: time="2025-09-05T06:05:19.343323460Z" level=info msg="StartContainer for \"23acdc8fcbceae71cddfa20d4bc68bc5c12413ef3f6a1c41690c4df918db2458\" returns successfully" Sep 5 06:05:19.568899 kubelet[2683]: I0905 06:05:19.568767 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-55995d5b69-tb6ks" podStartSLOduration=21.113877762 podStartE2EDuration="27.568749851s" podCreationTimestamp="2025-09-05 06:04:52 +0000 UTC" firstStartedPulling="2025-09-05 06:05:12.782718537 +0000 UTC m=+41.509104198" lastFinishedPulling="2025-09-05 06:05:19.237590506 +0000 UTC m=+47.963976287" observedRunningTime="2025-09-05 06:05:19.56720025 +0000 UTC m=+48.293585871" watchObservedRunningTime="2025-09-05 06:05:19.568749851 +0000 UTC m=+48.295135512" Sep 5 06:05:19.594060 containerd[1527]: time="2025-09-05T06:05:19.594003419Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23acdc8fcbceae71cddfa20d4bc68bc5c12413ef3f6a1c41690c4df918db2458\" id:\"8cc5b7583ee7e65ea01fb3ffd216f6733f6077aa90a2594037eb903c974a812b\" pid:5262 exited_at:{seconds:1757052319 nanos:584336416}" Sep 5 06:05:20.855115 containerd[1527]: time="2025-09-05T06:05:20.855058599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:20.855941 containerd[1527]: time="2025-09-05T06:05:20.855745880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=13761208" Sep 5 06:05:20.856783 containerd[1527]: time="2025-09-05T06:05:20.856672400Z" level=info msg="ImageCreate event name:\"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:20.858898 containerd[1527]: time="2025-09-05T06:05:20.858859241Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 5 06:05:20.859785 containerd[1527]: time="2025-09-05T06:05:20.859755441Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"15130401\" in 1.621987695s" Sep 5 06:05:20.859889 containerd[1527]: time="2025-09-05T06:05:20.859865961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:a319b5bdc1001e98875b68e2943279adb74bcb19d09f1db857bc27959a078a65\"" Sep 5 06:05:20.864865 containerd[1527]: time="2025-09-05T06:05:20.864834082Z" level=info msg="CreateContainer within sandbox \"52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 5 06:05:20.874247 containerd[1527]: time="2025-09-05T06:05:20.873158885Z" level=info msg="Container 00e8c4892797499d292e5745813424bde42d42208f89eba5de142286928aa61f: CDI devices from CRI Config.CDIDevices: []" Sep 5 06:05:20.880040 containerd[1527]: time="2025-09-05T06:05:20.879990167Z" level=info msg="CreateContainer within sandbox \"52678f9c723cb4164ec1a9eed64dd78c1b1f76b4127fcb9a618ab2fa5c978a74\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"00e8c4892797499d292e5745813424bde42d42208f89eba5de142286928aa61f\"" Sep 5 06:05:20.881166 containerd[1527]: time="2025-09-05T06:05:20.881138407Z" level=info msg="StartContainer for \"00e8c4892797499d292e5745813424bde42d42208f89eba5de142286928aa61f\"" Sep 5 06:05:20.882760 containerd[1527]: 
time="2025-09-05T06:05:20.882725448Z" level=info msg="connecting to shim 00e8c4892797499d292e5745813424bde42d42208f89eba5de142286928aa61f" address="unix:///run/containerd/s/2036579982c513c0fbcf02063832207f5d58d3e735144b0a67cf101066b83849" protocol=ttrpc version=3 Sep 5 06:05:20.904535 systemd[1]: Started cri-containerd-00e8c4892797499d292e5745813424bde42d42208f89eba5de142286928aa61f.scope - libcontainer container 00e8c4892797499d292e5745813424bde42d42208f89eba5de142286928aa61f. Sep 5 06:05:20.939064 containerd[1527]: time="2025-09-05T06:05:20.939018464Z" level=info msg="StartContainer for \"00e8c4892797499d292e5745813424bde42d42208f89eba5de142286928aa61f\" returns successfully" Sep 5 06:05:21.430914 kubelet[2683]: I0905 06:05:21.430869 2683 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 5 06:05:21.433159 kubelet[2683]: I0905 06:05:21.433129 2683 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 5 06:05:22.217292 systemd[1]: Started sshd@9-10.0.0.144:22-10.0.0.1:48914.service - OpenSSH per-connection server daemon (10.0.0.1:48914). Sep 5 06:05:22.277756 sshd[5317]: Accepted publickey for core from 10.0.0.1 port 48914 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:05:22.279469 sshd-session[5317]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:05:22.283672 systemd-logind[1502]: New session 10 of user core. Sep 5 06:05:22.292539 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 5 06:05:22.471359 sshd[5320]: Connection closed by 10.0.0.1 port 48914 Sep 5 06:05:22.471869 sshd-session[5317]: pam_unix(sshd:session): session closed for user core Sep 5 06:05:22.482598 systemd[1]: sshd@9-10.0.0.144:22-10.0.0.1:48914.service: Deactivated successfully. 
Sep 5 06:05:22.484465 systemd[1]: session-10.scope: Deactivated successfully. Sep 5 06:05:22.485220 systemd-logind[1502]: Session 10 logged out. Waiting for processes to exit. Sep 5 06:05:22.487898 systemd[1]: Started sshd@10-10.0.0.144:22-10.0.0.1:48916.service - OpenSSH per-connection server daemon (10.0.0.1:48916). Sep 5 06:05:22.489713 systemd-logind[1502]: Removed session 10. Sep 5 06:05:22.547284 sshd[5334]: Accepted publickey for core from 10.0.0.1 port 48916 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:05:22.548691 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:05:22.552948 systemd-logind[1502]: New session 11 of user core. Sep 5 06:05:22.562547 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 5 06:05:22.804488 sshd[5337]: Connection closed by 10.0.0.1 port 48916 Sep 5 06:05:22.805180 sshd-session[5334]: pam_unix(sshd:session): session closed for user core Sep 5 06:05:22.815129 systemd[1]: sshd@10-10.0.0.144:22-10.0.0.1:48916.service: Deactivated successfully. Sep 5 06:05:22.816745 systemd[1]: session-11.scope: Deactivated successfully. Sep 5 06:05:22.819249 systemd-logind[1502]: Session 11 logged out. Waiting for processes to exit. Sep 5 06:05:22.822117 systemd[1]: Started sshd@11-10.0.0.144:22-10.0.0.1:48924.service - OpenSSH per-connection server daemon (10.0.0.1:48924). Sep 5 06:05:22.824189 systemd-logind[1502]: Removed session 11. Sep 5 06:05:22.880107 sshd[5349]: Accepted publickey for core from 10.0.0.1 port 48924 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:05:22.881717 sshd-session[5349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:05:22.887963 systemd-logind[1502]: New session 12 of user core. Sep 5 06:05:22.899580 systemd[1]: Started session-12.scope - Session 12 of User core. 
Sep 5 06:05:23.053537 sshd[5352]: Connection closed by 10.0.0.1 port 48924 Sep 5 06:05:23.053880 sshd-session[5349]: pam_unix(sshd:session): session closed for user core Sep 5 06:05:23.057446 systemd[1]: sshd@11-10.0.0.144:22-10.0.0.1:48924.service: Deactivated successfully. Sep 5 06:05:23.059185 systemd[1]: session-12.scope: Deactivated successfully. Sep 5 06:05:23.059842 systemd-logind[1502]: Session 12 logged out. Waiting for processes to exit. Sep 5 06:05:23.060855 systemd-logind[1502]: Removed session 12. Sep 5 06:05:23.795727 kubelet[2683]: I0905 06:05:23.795612 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 5 06:05:23.880660 containerd[1527]: time="2025-09-05T06:05:23.880615794Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c6fe0eda2ddf1e84332a3771eacd6859aa0385f7b651e176bec296ac5ad666b\" id:\"d56717fdd369ef62eeffb6202f95249dbe95f7a78a77a8e1b99f37421d68af6e\" pid:5380 exited_at:{seconds:1757052323 nanos:880163554}" Sep 5 06:05:23.897277 kubelet[2683]: I0905 06:05:23.896857 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-kl9t2" podStartSLOduration=24.743825257 podStartE2EDuration="32.896837518s" podCreationTimestamp="2025-09-05 06:04:51 +0000 UTC" firstStartedPulling="2025-09-05 06:05:12.7076081 +0000 UTC m=+41.433993721" lastFinishedPulling="2025-09-05 06:05:20.860620361 +0000 UTC m=+49.587005982" observedRunningTime="2025-09-05 06:05:21.575711322 +0000 UTC m=+50.302096983" watchObservedRunningTime="2025-09-05 06:05:23.896837518 +0000 UTC m=+52.623223179" Sep 5 06:05:23.994854 containerd[1527]: time="2025-09-05T06:05:23.994811782Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3c6fe0eda2ddf1e84332a3771eacd6859aa0385f7b651e176bec296ac5ad666b\" id:\"5d15a68190d17ed56d1bf3f9ca821041571c2f447d08d685c71ab389a50450be\" pid:5404 exited_at:{seconds:1757052323 nanos:994266742}" Sep 5 06:05:28.071573 systemd[1]: Started 
sshd@12-10.0.0.144:22-10.0.0.1:48934.service - OpenSSH per-connection server daemon (10.0.0.1:48934). Sep 5 06:05:28.113651 sshd[5426]: Accepted publickey for core from 10.0.0.1 port 48934 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:05:28.115103 sshd-session[5426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:05:28.118943 systemd-logind[1502]: New session 13 of user core. Sep 5 06:05:28.132530 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 5 06:05:28.270950 sshd[5429]: Connection closed by 10.0.0.1 port 48934 Sep 5 06:05:28.271600 sshd-session[5426]: pam_unix(sshd:session): session closed for user core Sep 5 06:05:28.283848 systemd[1]: sshd@12-10.0.0.144:22-10.0.0.1:48934.service: Deactivated successfully. Sep 5 06:05:28.286746 systemd[1]: session-13.scope: Deactivated successfully. Sep 5 06:05:28.287522 systemd-logind[1502]: Session 13 logged out. Waiting for processes to exit. Sep 5 06:05:28.289940 systemd[1]: Started sshd@13-10.0.0.144:22-10.0.0.1:48944.service - OpenSSH per-connection server daemon (10.0.0.1:48944). Sep 5 06:05:28.290567 systemd-logind[1502]: Removed session 13. Sep 5 06:05:28.355579 sshd[5442]: Accepted publickey for core from 10.0.0.1 port 48944 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:05:28.357328 sshd-session[5442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:05:28.361016 systemd-logind[1502]: New session 14 of user core. Sep 5 06:05:28.371526 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 5 06:05:28.562446 sshd[5445]: Connection closed by 10.0.0.1 port 48944 Sep 5 06:05:28.561962 sshd-session[5442]: pam_unix(sshd:session): session closed for user core Sep 5 06:05:28.576524 systemd[1]: sshd@13-10.0.0.144:22-10.0.0.1:48944.service: Deactivated successfully. Sep 5 06:05:28.578832 systemd[1]: session-14.scope: Deactivated successfully. 
Sep 5 06:05:28.580846 systemd-logind[1502]: Session 14 logged out. Waiting for processes to exit. Sep 5 06:05:28.582749 systemd[1]: Started sshd@14-10.0.0.144:22-10.0.0.1:48958.service - OpenSSH per-connection server daemon (10.0.0.1:48958). Sep 5 06:05:28.584269 systemd-logind[1502]: Removed session 14. Sep 5 06:05:28.631158 sshd[5456]: Accepted publickey for core from 10.0.0.1 port 48958 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:05:28.632331 sshd-session[5456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:05:28.636003 systemd-logind[1502]: New session 15 of user core. Sep 5 06:05:28.642525 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 5 06:05:29.286512 sshd[5459]: Connection closed by 10.0.0.1 port 48958 Sep 5 06:05:29.285456 sshd-session[5456]: pam_unix(sshd:session): session closed for user core Sep 5 06:05:29.305021 systemd[1]: sshd@14-10.0.0.144:22-10.0.0.1:48958.service: Deactivated successfully. Sep 5 06:05:29.310743 systemd[1]: session-15.scope: Deactivated successfully. Sep 5 06:05:29.315624 systemd-logind[1502]: Session 15 logged out. Waiting for processes to exit. Sep 5 06:05:29.317985 systemd[1]: Started sshd@15-10.0.0.144:22-10.0.0.1:48966.service - OpenSSH per-connection server daemon (10.0.0.1:48966). Sep 5 06:05:29.321776 systemd-logind[1502]: Removed session 15. Sep 5 06:05:29.381421 sshd[5476]: Accepted publickey for core from 10.0.0.1 port 48966 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:05:29.382029 sshd-session[5476]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:05:29.387036 systemd-logind[1502]: New session 16 of user core. Sep 5 06:05:29.397526 systemd[1]: Started session-16.scope - Session 16 of User core. 
Sep 5 06:05:29.559696 containerd[1527]: time="2025-09-05T06:05:29.559589443Z" level=info msg="TaskExit event in podsandbox handler container_id:\"23acdc8fcbceae71cddfa20d4bc68bc5c12413ef3f6a1c41690c4df918db2458\" id:\"843257d998c7612d30c16f274b92a75c9fbd6023b365a800903b348498b92bcb\" pid:5504 exited_at:{seconds:1757052329 nanos:558670923}" Sep 5 06:05:29.708350 sshd[5485]: Connection closed by 10.0.0.1 port 48966 Sep 5 06:05:29.708646 sshd-session[5476]: pam_unix(sshd:session): session closed for user core Sep 5 06:05:29.721815 systemd[1]: sshd@15-10.0.0.144:22-10.0.0.1:48966.service: Deactivated successfully. Sep 5 06:05:29.723916 systemd[1]: session-16.scope: Deactivated successfully. Sep 5 06:05:29.727553 systemd-logind[1502]: Session 16 logged out. Waiting for processes to exit. Sep 5 06:05:29.730327 systemd[1]: Started sshd@16-10.0.0.144:22-10.0.0.1:48970.service - OpenSSH per-connection server daemon (10.0.0.1:48970). Sep 5 06:05:29.731870 systemd-logind[1502]: Removed session 16. Sep 5 06:05:29.785904 sshd[5521]: Accepted publickey for core from 10.0.0.1 port 48970 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:05:29.786519 sshd-session[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:05:29.790604 systemd-logind[1502]: New session 17 of user core. Sep 5 06:05:29.797514 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 5 06:05:29.935497 sshd[5524]: Connection closed by 10.0.0.1 port 48970 Sep 5 06:05:29.935856 sshd-session[5521]: pam_unix(sshd:session): session closed for user core Sep 5 06:05:29.939554 systemd[1]: sshd@16-10.0.0.144:22-10.0.0.1:48970.service: Deactivated successfully. Sep 5 06:05:29.941544 systemd[1]: session-17.scope: Deactivated successfully. Sep 5 06:05:29.942305 systemd-logind[1502]: Session 17 logged out. Waiting for processes to exit. Sep 5 06:05:29.944628 systemd-logind[1502]: Removed session 17. 
Sep 5 06:05:34.950485 systemd[1]: Started sshd@17-10.0.0.144:22-10.0.0.1:44788.service - OpenSSH per-connection server daemon (10.0.0.1:44788). Sep 5 06:05:35.005543 sshd[5544]: Accepted publickey for core from 10.0.0.1 port 44788 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:05:35.006694 sshd-session[5544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:05:35.010183 systemd-logind[1502]: New session 18 of user core. Sep 5 06:05:35.017512 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 5 06:05:35.137202 sshd[5547]: Connection closed by 10.0.0.1 port 44788 Sep 5 06:05:35.137533 sshd-session[5544]: pam_unix(sshd:session): session closed for user core Sep 5 06:05:35.140839 systemd[1]: sshd@17-10.0.0.144:22-10.0.0.1:44788.service: Deactivated successfully. Sep 5 06:05:35.142464 systemd[1]: session-18.scope: Deactivated successfully. Sep 5 06:05:35.144866 systemd-logind[1502]: Session 18 logged out. Waiting for processes to exit. Sep 5 06:05:35.145915 systemd-logind[1502]: Removed session 18. Sep 5 06:05:40.148522 systemd[1]: Started sshd@18-10.0.0.144:22-10.0.0.1:36234.service - OpenSSH per-connection server daemon (10.0.0.1:36234). Sep 5 06:05:40.204434 sshd[5563]: Accepted publickey for core from 10.0.0.1 port 36234 ssh2: RSA SHA256:xkXFnONh5NSK++8uJoUtZG7bJt4aRGla06bk3BJ3qjE Sep 5 06:05:40.205509 sshd-session[5563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 5 06:05:40.208951 systemd-logind[1502]: New session 19 of user core. Sep 5 06:05:40.218514 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 5 06:05:40.343296 sshd[5566]: Connection closed by 10.0.0.1 port 36234 Sep 5 06:05:40.343812 sshd-session[5563]: pam_unix(sshd:session): session closed for user core Sep 5 06:05:40.347862 systemd[1]: sshd@18-10.0.0.144:22-10.0.0.1:36234.service: Deactivated successfully. 
Sep 5 06:05:40.349550 systemd[1]: session-19.scope: Deactivated successfully. Sep 5 06:05:40.351903 systemd-logind[1502]: Session 19 logged out. Waiting for processes to exit. Sep 5 06:05:40.353127 systemd-logind[1502]: Removed session 19.